Okay, perhaps I can help clarify how the task actually executes, which may put this in better perspective.
The patching happens in what we might call quasi-parallel mode. It does not do all 600 systems simultaneously; in fact, a single Patch Manager server can handle about 10-12 systems simultaneously. Assuming 10-15 minutes to install updates on a single target, a single Patch Manager server in its default configuration can patch a few dozen systems in an hour, more if the installation cycle runs faster on some or all of the systems. This scenario also presumes that the updates have been downloaded to the client systems prior to the installation task; if the updates have not been downloaded, cut that number in half, maybe more. That is to say, if the task includes downloading updates, you'll likely see no more than a couple dozen systems patched in an hour, as the majority of the time will be consumed doing file transfers from the WSUS server.
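To make the arithmetic explicit, here's a back-of-the-envelope sketch in Python. The concurrency and per-system times are the figures from the paragraph above; adjust them to what you actually observe in your environment.

    # Throughput estimate for a single Patch Manager server.
    def systems_per_hour(concurrent_slots, minutes_per_system):
        # Each slot completes 60/minutes_per_system installs per hour.
        return concurrent_slots * (60 / minutes_per_system)

    # Default configuration: ~10-12 simultaneous systems, 10-15 min per install.
    best = systems_per_hour(12, 10)    # ~72 systems/hour
    worst = systems_per_hour(10, 15)   # ~40 systems/hour

    # If clients must also download updates during the task, roughly halve it.
    print(f"pre-downloaded:     {worst:.0f}-{best:.0f} systems/hour")
    print(f"download + install: {worst/2:.0f}-{best/2:.0f} systems/hour")

That's where "a few dozen systems in an hour" comes from, and why skipping the pre-download roughly halves it.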
There are a number of ways in which we can increase this parallelism. If the Patch Manager server is installed on a multi-core/multi-socket system, you can increase the number of worker processes and/or threads that are used to execute a task. By default the Patch Manager server is configured to run two worker processes with a thread pool size of 16. The thread pool can be increased to 256 per process, and the server can be configured with up to 8 worker processes. The objective is to increase the thread pool and number of worker processes up to the point that you achieve maximum CPU utilization without running out of process memory space. (Paging processes/threads to disk will destroy any benefits achieved from launching more connections to client systems.)
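To see why the process/thread settings matter, here's a rough sizing sketch. The defaults and ceilings (2 processes x 16 threads, up to 8 x 256) are the figures from the paragraph above; the memory-per-thread number is purely a placeholder assumption to illustrate the paging concern, and you'd replace it with measurements from your own server.

    # The concurrency ceiling is simply worker processes x threads per process.
    def max_connections(processes, threads_per_process):
        return processes * threads_per_process

    print(max_connections(2, 16))    # default: 32 potential concurrent connections
    print(max_connections(8, 256))   # ceiling: 2048, useful only if CPU/RAM keep up

    # Illustrating the paging concern: estimated working set per worker process.
    EST_MB_PER_THREAD = 2            # placeholder assumption; measure your own
    for procs, threads in [(2, 16), (4, 64), (8, 256)]:
        print(f"{procs} procs x {threads} threads: ~{threads * EST_MB_PER_THREAD} MB per process")

The point of the second loop: each step up in thread count multiplies the per-process working set, so raise these values incrementally and watch memory, not just CPU.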
Another option is to deploy additional Patch Manager Automation Role servers. The Automation Role is the service that initiates and monitors task execution. If you're patching systems at remote sites, there is significant benefit in having an Automation Role server on the local network. If you're patching a large number of systems in a single site, a pool of Automation Role servers can multiply the parallelism of the patch deployment. In one case study, a data center with over 700 servers is being patched in four one-hour cycles of approximately 200 servers per cycle, using a pool of four Automation Role servers; each Automation Role server patches about 50-60 systems in an hour. The important note here is that successfully patching several hundred systems within a specified time frame requires an architected solution with baselining and performance management, as well as a strong awareness of the work to be done during that installation cycle.
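The case-study arithmetic is straightforwardly multiplicative. A quick sketch using the numbers above:

    # Capacity of a pool of Automation Role servers across multiple cycles,
    # using the case-study figures (4 servers, ~50-60 systems/hour each,
    # four one-hour cycles).
    def pool_capacity(servers, per_server_per_hour, cycles):
        return servers * per_server_per_hour * cycles

    low = pool_capacity(4, 50, 4)    # 800
    high = pool_capacity(4, 60, 4)   # 960, comfortably covers 700+ servers
    print(f"{low}-{high} systems across four one-hour cycles")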
Regarding the relationship of the file transfers from Microsoft to the WSUS server to the clients: when you approve an update, the WSUS server queues that update's file(s) for download from Microsoft. Depending on the number of updates approved and the available Internet bandwidth, this download task may last from a few minutes to several hours. Once an update's file(s) have been successfully downloaded to the WSUS server, that update becomes available for download by the WSUS client systems. However, this download event does not occur immediately; it occurs in a staggered fashion, as each client executes its regularly scheduled detection event looking for new updates. The default detection interval is 22 hours (functionally, something between 17.6 and 22.0 hours, because each client randomly shortens the interval by up to 20%), so from a practical perspective approximately 5% of your systems will launch download tasks for an update during each hour of the day after you approve the updates. As noted above, the objective is simply to ensure that the clients have completed those downloads prior to launching the installation task, or else accept that the download will occur as part of the installation task and adversely affect the number of clients that can be patched in a given time frame.
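If it helps to visualize the staggering, here's a toy simulation. It assumes each client's last detection happened at a uniformly random point within its interval, an assumption on my part, but a reasonable one for a steady-state fleet.

    import random
    from collections import Counter

    CLIENTS = 600
    hours_until_detection = []
    for _ in range(CLIENTS):
        interval = random.uniform(0.8 * 22.0, 22.0)   # 17.6-22.0 h per the text
        elapsed = random.uniform(0, interval)         # time since last detection
        hours_until_detection.append(interval - elapsed)

    # Count how many clients detect (and start downloading) in each hour.
    per_hour = Counter(int(t) for t in hours_until_detection)
    for hour in sorted(per_hour):
        pct = 100 * per_hour[hour] / CLIENTS
        print(f"hour {hour:2d}: {per_hour[hour]:3d} clients ({pct:4.1f}%)")

Run it and you'll see roughly 25-30 clients (about 5%) landing in each hour, tapering off toward the end, which is where the ~5%-per-hour figure comes from.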
Looking at the Task History Details you've provided, I also see another significant impact on execution time, and that's the launch of the pre-installation reboot. For every system performing a pre-installation reboot, add another few minutes to the per-system execution tally. With pre-installation reboots, I would expect per-system execution times in the 15-20 minute range, and therefore no more than a couple dozen completions per hour. We see that 12 systems were issued reboot commands at task launch (5:12am). If we look at a couple of these examples, we can get some empirical indications of expected performance. Machine 'ooo311bh.com' executed a pre-installation reboot at 5:12am and a post-installation reboot at 5:41am, so in this case the installation itself took approximately 25 minutes. A more positive example, '000313bh.com', has a pre-reboot at 5:12am and a post-reboot at 5:19am, completing its update installation in only a few minutes. Using these "Pre-Reboot" and "Post-Reboot" events alone actually gives you a very good trace through the task history of how many machines were being processed during each hour and how long each machine's installation actually ran.
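If you want to automate that trace, the pairing logic is simple. Here's a sketch using the two machines above; note that the record layout is a hypothetical export I've invented for illustration, not Patch Manager's actual task-history schema.

    from datetime import datetime

    # Hypothetical (host, event, time) rows pulled from the task history.
    rows = [
        ("ooo311bh.com", "Pre-Reboot",  "05:12"),
        ("ooo311bh.com", "Post-Reboot", "05:41"),
        ("000313bh.com", "Pre-Reboot",  "05:12"),
        ("000313bh.com", "Post-Reboot", "05:19"),
    ]

    # Pair each machine's pre- and post-reboot timestamps.
    events = {}
    for host, kind, hhmm in rows:
        events.setdefault(host, {})[kind] = datetime.strptime(hhmm, "%H:%M")

    for host, e in events.items():
        minutes = (e["Post-Reboot"] - e["Pre-Reboot"]).total_seconds() / 60
        # Subtract a few minutes of reboot overhead to estimate the install itself.
        print(f"{host}: ~{minutes:.0f} min between pre- and post-reboot")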
For a more chronological, per-machine look, you can sort by [1] JobID and [2] Completion Time to see the impact on each machine individually.
The other observation I'll make is that many of these machines are installing a large collection of Microsoft .NET Framework updates ... there were seven in total available for installation ... and as noted in my previous reply, .NET Framework updates are notoriously time-consuming, so I don't find it surprising at all that installing a half-dozen .NET updates consumed 20-30 minutes on any given machine.