Gasp! Who speaks such heresy in a world where the Nvidia GPU and its wannabes are the universal unguent to solve – salve, surely? – our modern computing problems? Well, we do. While GPUs excel at the dense matrix, high precision floating point math that dominates HPC simulation and modeling, a lot of the data that underpins AI frameworks is sparse and lower precision to boot. And given this, maybe there are better ways to do this.

Given that many of the neural networks created by AI frameworks are themselves graphs – the kind with vertices holding data and edges showing the relationships between the data, not something generated in Excel – or output what amounts to a graph, maybe, in the end, what we need is a really good graph processor. If you look back at it now, especially with the advent of massively parallel computing on GPUs, maybe the techies at Tera Computing and then Cray had the right idea with their “ThreadStorm” massively threaded processors and high bandwidth interconnects.

The US Defense Advanced Research Projects Agency, the research and development arm of the Department of Defense, explores just such cutting edge questions, and has been looking into creating a massively parallel graph processor and interconnect since establishing the Hierarchical Identify Verify Exploit (HIVE) project back in 2017. Intel was chosen to make the HIVE processor, and Lincoln Laboratory at MIT and Amazon Web Services were chosen to create and host a trillion-edge graph dataset for a system based on such processors to chew on.

At Hot Chips 2023 this week, Intel was showing off the processor it created for the HIVE project, which was originally codenamed “Puma” in reference to the Programmable Integrated Unified Memory Architecture (PIUMA) that underpins it.

In August 2019, Intel gave an update on the PIUMA chip at DARPA’s ERI Summit, and at the IEEE’s High Performance Extreme Computing 2020 event in September 2020, Intel researchers Balasubramanian Seshasayee, Joshua Fryman, and Ibrahim Hur gave a presentation called Hash Table Scalability on Intel PIUMA, which is behind an IEEE paywall, and a paper called PIUMA: Programmable Integrated Unified Memory Architecture, which is not behind a paywall. Both gave an overview of the processor, but were vague about the architecture of the PIUMA system. This week, Jason Howard, a principal engineer at Intel, gave an update on the PIUMA processor and system, including a photonics interconnect that Intel has created in conjunction with Ayar Labs to lash an enormous number of processors together.

In the IEEE paper, the PIUMA researchers made no bones about the fact that they were absolutely inspired by the Cray XMT line. The XMT line from a decade ago culminated with a massive shared memory thread monster that was perfect for graph analysis: up to 8,192 processors, each with 128 threads running at 500 MHz, plugging into an AMD Rev F socket used by the Opteron 8000 series of X86 CPUs, all lashed together with a custom “SeaStar2+” torus interconnect that delivered 1.05 million threads and 512 TB of shared main memory for a graph to stretch its legs upon. As far as Linux was concerned, this looked like a single CPU. What’s old is new again with the PIUMA project, and this time the processor is more modest but the interconnect is much better.

When Intel started out designing the PIUMA chip, according to Howard, the researchers working on the HIVE project realized that graph jobs were not just massively parallel, but embarrassingly parallel, which meant there were probably ways to exploit that parallelism to drive up the performance of graph analytics. When running on standard X86 processors, graph databases had very bad cache line utilization: more than 80 percent of the time, only 8 bytes or less of a 72 byte cache line was actually used. And presumably the price/performance can be driven up as well, and for the love of all that is holy in heaven, perhaps Intel will commercialize this PIUMA system and really shake things up.
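That poor cache line utilization for graph work is easy to see with a toy model. The sketch below is purely illustrative (it is not Intel's measurement methodology): it assumes a plain 64-byte cache line, models graph pointer chasing as random 8-byte reads over a memory region far larger than cache, and reports what fraction of the fetched bytes were actually wanted.

```python
import random

CACHE_LINE = 64  # bytes fetched per miss on a typical X86 core (assumption)
WORD = 8         # bytes a traversal actually needs per hop (one vertex/edge ID)

def cache_line_utilization(num_words, num_accesses, seed=1):
    """Model pointer chasing over a huge graph as random 8-byte reads,
    then return (bytes used) / (bytes fetched in whole cache lines)."""
    rng = random.Random(seed)
    words_read = {rng.randrange(num_words) for _ in range(num_accesses)}
    lines_fetched = {(w * WORD) // CACHE_LINE for w in words_read}
    return (len(words_read) * WORD) / (len(lines_fetched) * CACHE_LINE)

# A working set of 10 million 8-byte words, 10,000 random hops:
# almost every hop pulls in a fresh line and uses only 8 of its 64 bytes,
# so utilization lands near 8/64, or about 12.5 percent.
print(f"{cache_line_utilization(10**7, 10_000):.1%}")
```

With a spatially local workload the touched words would cluster into far fewer lines and the ratio would climb toward 100 percent; it is the random, hop-to-anywhere access pattern of graph traversal that keeps it pinned near the floor, which is exactly the behavior PIUMA's narrow memory accesses are meant to exploit.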