Although the weather in northern California has cooled down in mid-August, the upcoming Hot Chips Conference, held at the Flint Center at DeAnza College in Cupertino, Calif., Aug. 20-22, promises to heat things up. Presentations will cover the latest high-performance graphics engines, compute engines, field-programmable gate array (FPGA) accelerators, and other application processors, including one presentation detailing a 5-microwatt self-timed microcontroller powered by energy harvesting. Two tutorials on Sunday, Aug. 20 will cover the new P4 language and hardware implementation issues for Software Defined Networks in the morning, and End-to-End Autonomous Vehicle Platforms in the afternoon.
On Monday, Aug. 21, after the opening paper from Microsoft detailing the Scorpio processor in its forthcoming Xbox One X system, the conference will pause for a short break to view the solar eclipse. After the conference resumes, AMD and Nvidia will describe their respective high-performance graphics engines, the Vega 10 and Volta. The remainder of the morning session includes a paper by SiFive detailing the company's Freedom system-on-chip processors based on the open-source RISC-V CPU core. Following that presentation, ETA Compute will show off its self-timed ARM M3-based microcontroller that consumes just 5 microwatts, allowing it to be powered by various energy-harvesting sources.
Monday afternoon will kick off with a keynote speech, "The Direct Human/Machine Interface and Hints of a General Artificial Intelligence," presented by Dr. Phillip Alvelda, now at Wiseteachers.com. Following his presentation are two papers covering autonomous vehicle technology, one by Renesas Electronics Corp. and the other by Swift Navigation. Eight poster papers on various topics will be hosted during the afternoon coffee break. Topics include: using texture compression hardware for neural network inference; Sound Tracing, a real-time sound propagation hardware accelerator; a memory-efficient persistent key-value store on eNVM SSDs; accelerating big data workloads with FPGAs; Loom, a precision-exploiting neural network accelerator; Epiphany-V, a TFLOPS-scale 16-nm 1,024-core 64-bit RISC array processor; a fully integrated surround-vision and mirror-replacement SoC for ADAS/automated driving; and GRVI Phalanx, a 1,680-core, 26-Mbyte RISC-V FPGA-based parallel processor.
Closing out the day on Monday, three processor presentations from Baidu/Intel, UCSD/Cornell/University of Michigan, and ThinCI provide insights into highly parallel solutions. The Baidu/Intel paper details a programmable FPGA accelerator that handles diverse workloads; the UCSD et al. paper details Celerity, a tiered accelerator fabric based on the open-source RISC-V processor; and the ThinCI presentation shows off a graph streaming processor the company deems a "next-generation computing architecture."
Kicking off the second day of the Hot Chips conference will be a quartet of FPGA papers: two from Xilinx, one from Altera/Intel, and one from Amazon. The first Xilinx presentation details the monolithic integration of RF data converters on a programmable fabric using 16-nm FinFETs for digital-RF communications applications. Following that, Altera/Intel will show off a 14-nm heterogeneous FPGA system-in-package that forms a platform for system-level integration. The second Xilinx presentation highlights a 16-nm FPGA family that incorporates high-bandwidth memory modules and targets datacenter applications. Lastly, Amazon will show how it uses FPGAs to accelerate computing subsystems in its AWS F1 instances.
Following the morning break on Tuesday, attention turns to neural networks, with a presentation from Wave Computing discussing a dataflow processing chip for training deep neural networks, and another from Microsoft discussing the acceleration of persistent neural networks at datacenter scale. Google follows these two papers with a keynote presentation examining recent advances in artificial intelligence via machine learning and the implications for computer system design. Additional presentations on neural networks follow the lunch break, with papers from Harvard/ARM Research, KAIST, and Google. Harvard/ARM will show off a deep neural network inference engine, KAIST will detail a deep neural network processor with on-chip stereo matching, and Google will provide a performance analysis of its Tensor Processing Unit for its AI algorithms.
The last sessions of the conference focus on processor architectures, including Cisco's 400-Gbit/s multicore network processor and ARM's DynamIQ, a processor employing cluster-based multiprocessing. Last but not least, the final four papers spotlight some extreme-performance processors: IBM will take us through its z14 microprocessor chip set, AMD will highlight its next-generation enterprise server processor architecture, Intel will explore its recently released Xeon Scalable processor (formerly Skylake-SP), and Qualcomm will dive into its Centriq 2400 processor. The Centriq processor, also known as Falkor, is based on the 64-bit ARMv8-compliant architecture and was designed for cloud-computing applications.
For program details or at-conference registration, go to hotchips.org.
Semiconductor Technology Editor