In Apache Zeppelin prior to 0.8.0, the cron scheduler was enabled by default and could allow users to run paragraphs as other users without authentication.
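A minimal sketch of the corresponding hardening in 0.8.0 and later, assuming the 'zeppelin.notebook.cron.enable' property introduced in that release (set in conf/zeppelin-site.xml):

    <!-- conf/zeppelin-site.xml: keep the cron scheduler disabled (the 0.8.0 default) -->
    <property>
      <name>zeppelin.notebook.cron.enable</name>
      <value>false</value>
    </property>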
In all versions of Apache Spark, its standalone resource manager accepts code to execute on a 'master' host, which then runs that code on 'worker' hosts. The master itself does not, by design, execute user code; a specially crafted request to the master can, however, cause it to execute code too. Note that this does not affect standalone clusters with authentication enabled. While the master host typically has less outbound access to other resources than a worker, the execution of code on the master is nevertheless unexpected.
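Since standalone clusters with authentication enabled are not affected, a minimal sketch of that mitigation, assuming the standard 'spark.authenticate' properties are set identically on the master and every worker:

    # conf/spark-defaults.conf on the master and all workers
    spark.authenticate         true
    spark.authenticate.secret  <shared-secret>   # placeholder; use the same value cluster-wide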
In Apache Karaf versions prior to 3.0.9, 4.0.9, and 4.1.1, when the webconsole feature is installed, it is available at .../system/console and requires authentication to access. One part of the console is a Gogo shell/console that gives access to Karaf's command-line console via a web browser; it is available at .../system/console/gogo, and going directly to that URL also requires authentication. An optional bundle that some applications use is the Pax Web Extender Whiteboard; it is part of the pax-war feature, and perhaps others. When it is installed, the Gogo console also becomes available at .../gogo/, and that URL is not secured, giving unauthenticated users access to the Karaf console. A mitigation is to manually stop or uninstall the Gogo plugin bundle that is installed with the webconsole feature, although this removes the console from the .../system/console application as well, not only from the unauthenticated endpoint. One could also stop or uninstall the Pax Web Extender Whiteboard, but other components or applications may require it, and their functionality would then be reduced or broken.
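A sketch of that mitigation from the Karaf shell; the bundle id and the exact plugin bundle name vary by installation, so both are assumptions here:

    karaf@root()> bundle:list | grep Gogo    # locate the webconsole Gogo plugin bundle
    karaf@root()> bundle:stop 123            # 123 stands in for the id printed above
    karaf@root()> bundle:uninstall 123       # or remove the bundle entirely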
From version 1.3.0 onward, Apache Spark's standalone master exposes a REST API for job submission, in addition to the submission mechanism used by spark-submit. In standalone mode, the config property 'spark.authenticate.secret' establishes a shared secret for authenticating requests submitted via spark-submit. However, the REST API does not use this or any other authentication mechanism, and this is not adequately documented. A user can therefore use the REST API to run a driver program without authenticating, though not to launch executors. This REST API is also used by Mesos for job submission when set up to run in cluster mode (i.e., when also running MesosClusterDispatcher). Future versions of Spark will improve documentation on these points and prohibit setting 'spark.authenticate.secret' when running the REST APIs, to make this clear. Future versions will also disable the REST API by default in the standalone master by changing the default value of 'spark.master.rest.enabled' to 'false'.
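Pending those future versions, a minimal sketch of applying that default by hand, using the 'spark.master.rest.enabled' property the advisory names:

    # conf/spark-defaults.conf on the standalone master
    spark.master.rest.enabled  false   # turns off the REST submission server (port 6066 by default)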
In Apache httpd 2.2.0 to 2.4.29, when generating an HTTP Digest authentication challenge, the nonce sent to prevent replay attacks was not correctly generated using a pseudo-random seed. In a cluster of servers using a common Digest authentication configuration, HTTP requests could be replayed across servers by an attacker without detection.
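For context, a typical Digest setup of the kind affected; the paths and realm name here are illustrative, and the point is that identical settings replicated across a cluster made nonces replayable before the fix:

    # httpd.conf excerpt, repeated verbatim on every server in the cluster
    <Location "/private">
        AuthType Digest
        AuthName "private-area"
        AuthDigestProvider file
        AuthUserFile "/usr/local/apache2/passwd/digest"
        Require valid-user
    </Location>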
In Apache OpenMeetings 3.0.0 to 4.0.1, CRUD operations on privileged users are not password protected, allowing an authenticated attacker to deny service for privileged users.