That almost always means a node is broken, blocked, or unable to schedule pods, which prevents DNS from deploying.
That's the weird thing though. DNS is deployed, and all the nodes are happy according to "oc get nodes".
It seems that the operator is misreporting the error. The console dashboard also shows a number of alerts that seem out of date, which I'm not able to clear either.
The dns-default DaemonSet reports that 7 of 7 pods are ready.
Is there a way to reboot/re-initialise a "stuck" operator?
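For what it's worth, one common approach is to inspect the operator's own status conditions and then delete its pod so the Deployment recreates it and forces a fresh reconciliation. A sketch, assuming OpenShift 4.x; the `name=dns-operator` label selector is an assumption, so verify the actual pod labels first with `oc get pods -n openshift-dns-operator --show-labels`:

```shell
# Check what the DNS cluster operator itself reports; the Degraded and
# Progressing condition messages usually name the resource it is unhappy about.
oc get clusteroperator dns -o yaml

# Delete the operator pod. Its Deployment in the openshift-dns-operator
# namespace recreates it, which re-runs the operator's status evaluation.
oc delete pod -n openshift-dns-operator -l name=dns-operator

# Watch the operator come back and (hopefully) report a clean status.
oc get clusteroperator dns -w
```

This only restarts the operator process; it does not touch the dns-default DaemonSet or the CoreDNS pods it manages, so it is a low-risk first step before anything more invasive.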