We provide a simulation study complementing the theoretical results of Bos and Schmidt-Hieber (2021) for supervised classification using deep neural networks. Their main risk bound suggests a faster convergence rate of the truncated Kullback-Leibler divergence risk when the conditional class probability functions are smoother and when fewer conditional class probabilities are near zero; it also suggests that the convergence rate is fast when the functions have a high degree of smoothness even if many probabilities are near zero. The proportion of small conditional class probabilities can be measured by the small value bound index α. We calculate α for an illustrative selection of settings with conditional class probability functions that have an arbitrarily high Hölder smoothness index β. We estimate the Kullback-Leibler divergence risk convergence rate in these settings by evaluating networks trained on simulated datasets of various sizes. We find slower convergence rates than the main risk bound suggests. However, in line with expectations, α has no consistent effect on the convergence rate when combined with arbitrarily high β.
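
As a reading aid, the following minimal Python sketch shows one standard way such an empirical convergence rate can be estimated from risks evaluated at several sample sizes: fit risk ≈ C·n^(−rate) by least squares on the log-log scale. It is not the authors' code; the function name estimate_rate and the sample sizes and risk values are hypothetical placeholders.

# Minimal sketch (hypothetical, not the authors' code): estimate an empirical
# convergence rate by fitting risk ≈ C * n^(-rate) with least squares on the
# log-log scale, i.e. log(risk) = log(C) - rate * log(n).
import numpy as np

def estimate_rate(sample_sizes, risks):
    """Return the estimated rate from a log-log linear fit."""
    log_n = np.log(np.asarray(sample_sizes, dtype=float))
    log_r = np.log(np.asarray(risks, dtype=float))
    slope, _intercept = np.polyfit(log_n, log_r, deg=1)
    return -slope  # risk ~ C * n^(-rate), so the rate is the negated slope

if __name__ == "__main__":
    # Hypothetical truncated KL risks of networks trained at increasing n.
    sample_sizes = [1_000, 2_000, 4_000, 8_000, 16_000]
    risks = [0.31, 0.22, 0.16, 0.11, 0.08]
    print(f"estimated convergence rate: {estimate_rate(sample_sizes, risks):.3f}")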