@deelowe on r/google
The first design was called the "corkboard," so named because underneath each server was a square piece of cork that supported the motherboard. The servers overheated and violated just about every regulation for electronics you could imagine. They were quickly pulled.
After this, they had what was called the "breadrack." The first version had an interesting design where 2 servers shared a single PSU. There were multiple versions of this rack, with the second iteration being extremely successful. That second version is hard to find pictures of, because Google was extremely secretive about its DC hardware at that time.
After the breadrack, Google pivoted to datacenter scale solutions integrating things like power and cooling solutions into the rack itself. There have been numerous versions of this.
These designs eventually made their way into the open compute specification and the entire industry is moving in this direction.
Google also tried other designs that mostly never caught on, such as a containerized DC concept. Google abandoned that project, but other companies continued with the idea, and these types of designs are still produced by 3rd parties.