Due to concerns about the novel coronavirus and the risk of potential transmission, The Next Platform team regretfully announces the postponement of the March 10th event.
We will be announcing our new date as soon as conditions are clearer and we feel it is safe to hold a large group gathering.
Sponsors and speakers have been informed and agree with this decision. It is our hope that you are also understanding of this change and its rationale.
We expect to preserve the exact same schedule seen below for the rescheduled date and have been assured by all speakers that they will be on board with the change.
Please stay tuned for the new date.
The Next Platform team.
Much of what sets The Next Platform apart from other tech publications is depth and analysis. As it turns out, the key to both is knowing what questions to ask and pushing for answers that go beyond the basics and cut through marketing and hype.
This time we are conducting interviews in a new format—and we want you involved in the process.
Please join us on March 10, 2020 at The Glasshouse in downtown San Jose, CA for an all-day event featuring the same in-depth conversations you expect from TNP (and from our sold-out Next AI Platform event last year), live on stage, followed by a cocktail reception and dinner with opportunities to network with key people defining the next generation of AI infrastructure.
Meet the Next Platform team with plenty of time to talk about what matters to you, get first access to exclusive interviews, and spend the day with us in an intimate setting at San Jose’s premier event venue, The Glasshouse.
No marketing, no hype, no PowerPoint presentations, no one-sided vendor material. Just some of the best interviewers in the high-end infrastructure space and a lineup of thought leaders building the next generation of large-scale infrastructure to support emerging AI workloads. More on our approach to this topic can be found here. Like you, we have attended plenty of events that were enjoyable but offered presentations that were too general (or too specific), were pure marketing, or provided little real insight because of limited time for questions and a format that did not encourage well-rounded discussion.
The full agenda is coming soon; a sneak peek is below. Register now as a Super Early Bird, before the final schedule is announced, to save on admission and guarantee a seat. Last year’s edition of The Next AI Platform (2019 program here) sold out.
The Next AI Platform will feature live, in-depth interviews with those at the forefront of both the technology-creation and end-user sides of the AI infrastructure spectrum.
The full agenda is below, but here is a quick peek at just a few of the speakers/interviewees/panelists for the day…
Timothy Prickett Morgan (Host/Interviewer)
Timothy Prickett Morgan is Co-Founder and Co-Editor of The Next Platform and co-host of The Next AI Platform event on March 10, 2020 in San Jose. He brings 25 years of experience as a publisher, IT industry analyst, editor, and journalist for some of the world’s most widely-read high-tech and business publications including The Register, BusinessWeek, Midrange Computing, IT Jungle, Unigram, The Four Hundred, ComputerWire, Computer Business Review, Computer System News and IBM Systems User. Most recently, he was the Editor in Chief of EnterpriseTech.
Nicole Hemsoth is Co-Founder and Co-Editor of The Next Platform and co-host of The Next AI Platform event. In addition to her role analyzing the AI and semiconductor space, she brings insight from the world of high performance computing, most recently following a career covering supercomputing hardware and software as Editor in Chief of the long-standing supercomputing magazine HPCwire. She was founding editor and conceptual creator of the data-intensive computing magazine Datanami, as well as conceptual creator and founding Senior Editor of the large-scale infrastructure-focused EnterpriseTech.
Karl Freund is Moor Insights & Strategy’s lead analyst for HPC and deep learning. His recent experiences as the VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, position him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging deep learning opportunity in datacenters, from competitive landscape to ecosystem to strategy. Freund is also a Forbes contributing author.
Paul Teich is Liftr Insights’ Principal Analyst. Paul has been a leading data center industry analyst for seven years. He is also fluent in artificial intelligence (AI), edge computing, and the Internet of Things (IoT). Prior to Liftr Insights, Paul was a Principal Analyst at TIRIAS Research and a Senior Analyst for Moor Insights & Strategy. His career has focused on successfully commercializing technology-based products and services. Paul has been in high-tech for over three decades and brings knowledge from across many streams to the table.
David Kanter is Principal Analyst and Editor-in-Chief at Real World Tech, which focuses on microprocessors, servers, graphics, computers, and semiconductors. He is also a consultant specializing in intellectual property evaluation/development and technical/competitive analysis. Mr. Kanter has been quoted in the New York Times, CNN, Reuters, IEEE Spectrum, and many more over the last 10 years. Mr. Kanter was a Founder of Strandera, a startup commercializing advanced multi-core hardware and software technologies. Prior to Strandera, he was an early employee at Aster Data Systems (acquired by Teradata), a leader in data warehousing and analytic databases. He also was an Economic Analyst at the Huron Consulting Group.
Just a Few of Our Special Guests:
“Evolving Requirements for AI Inference Hardware: A Super-Investor Perspective”
Lip-Bu Tan has served as CEO of Cadence Design Systems, Inc. since January 2009 and has been a member of the Cadence Board of Directors since February 2004. He served as President of the company from 2009 to 2017. He also serves as chairman of Walden International, a venture capital firm he founded in 1987. Prior to founding Walden, Tan was Vice President at Chappell & Co. and held management positions at EDS Nuclear and ECHO Energy.
He is a member of The Business Council, is Chairman of the Board of SambaNova Systems, Inc, and serves on the board of directors of Hewlett Packard Enterprise Co., Schneider Electric, and Green Hills Software, as well as on the boards of the Electronic System Design Alliance (ESD Alliance) and the Global Semiconductor Association (GSA).
“Architectural Considerations for Next Generation Deep Learning”
Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control.
“The Evolution of Inference Systems at Facebook”
Misha Smelyanskiy is the Director of AI at Facebook. His team’s main mission is AI workload-driven SW/HW co-design via high-performance algorithmic, numerical, and system-level optimizations, along with performance modeling, to deliver high-performance software and hardware solutions across key AI services in Facebook’s datacenters.
His expertise spans HPC, performance optimization and modeling, parallel computer architecture and application-driven HW/SW co-design (including ML and other domains). He has held engineering positions at Intel Labs and was previously a senior system architect and director of the company’s co-design group.
“Emerging Technology Requirements at Swisscom: AI from Datacenter to Edge”
Cornelius Heckrott is VP of Technology and Innovation at European telecommunications giant Swisscom. From his base at Swisscom’s Cloud Lab, he focuses on exploring and analyzing what might be next in devices and software. During his one-on-one interview he will talk about Swisscom’s technology requirements and goals and how his teams evaluate new technologies, from datacenter-class systems down to the edge, which is of particular interest given Swisscom’s focus. Prior to Swisscom, Heckrott held engineering positions at companies including ABB, Roke, and Siemens. He is a graduate of the University of California’s Haas School of Business.
“Using AI to Design AI Chips: EDA Next Area for Deep Learning Growth”
Elias Fallon is currently Engineering Group Director at Cadence Design Systems, a leading Electronic Design Automation company. He has been involved in EDA for more than 20 years, since the founding of Neolinear, Inc., which was acquired by Cadence in 2004. Elias is currently co-Principal Investigator on the MAGESTIC project, funded by DARPA to investigate the application of machine learning to EDA for package/PCB and analog IC design. Elias also leads an innovation incubation team within the Custom IC R&D group as well as other traditional EDA product teams. Beyond his work developing electronic design automation tools, he has led software quality improvement initiatives within Cadence, partnering with the Carnegie Mellon Software Engineering Institute. Elias graduated from Carnegie Mellon University with an M.S. and B.S. in Electrical and Computer Engineering.
“Building Vast, Scalable Conversational AI Platforms: Pain Points in Software, Hardware”
Chandra Khatri is a Senior AI Research Scientist at Uber AI, driving conversational and multi-modal products and research at Uber. He has wide experience building scalable AI systems and taking state-of-the-art research to production. Prior to Uber, he was the Lead AI Scientist at Alexa, driving the science behind the Alexa Prize Competition, a $3.5 million university competition for advancing the state of conversational AI. Some of his recent work involves multi-modal and embodied understanding, common-sense and semantic understanding, natural language and speech processing, open-domain dialog systems, and deep learning.
Prior to Alexa, Chandra was a Research Scientist at eBay, where he led various deep learning and NLP initiatives, such as automatic text summarization and automatic content generation within the eCommerce domain, which led to significant gains for eBay. He holds degrees in Machine Learning and Computational Science & Engineering from Georgia Tech and BITS Pilani.
“Going Big on Inference Architecture: Chip Deep Dive/Use Cases”
Andy Hock is Director of Product at Cerebras Systems. Andy was the Senior Director of Advanced Technologies at Skybox Imaging, makers of high-resolution satellites, when the company was purchased by Google in 2014 for $500M. After the acquisition, he continued on as Product Manager at Google before joining the team at Cerebras. Prior to Skybox, Andy was the Senior Program Manager, business development lead and a Senior Scientist for Arete Associates. He holds a PhD in Geophysics and Space Physics from UCLA.
“Hardware Requirements for an AI-Driven Healthcare Platform” (Training/Inference)
Miguel Alvarado is CTO of Lumiata. He brings 23+ years of experience building teams and large-scale platforms that yield value in the consumer and B2B spaces. He has held senior engineering and analytics platform roles. Before Lumiata he was VP of Engineering at Vevo; before that, he was director of engineering for data and analytics at Verizon (focused on the OnCue platform) and led software efforts at OnHealth (acquired by WebMD), among other positions. He will discuss the hardware requirements for a next-generation platform that delivers efficient, secure AI, from accelerator requirements for training and inference to software stack needs.
“Reducing Hyperscale Requirements for Large-Scale Reinforcement Learning”
Enes Bilgin is a Sr. AI Engineer in Microsoft’s Autonomous Systems organization, with an expertise in Reinforcement Learning (RL). His work focuses on democratizing the AI and RL technologies for a broad array of industries as part of Microsoft’s Bonsai platform. To this end, he develops “Machine Teaching” methods to enable subject matter experts to transfer their know-how to AI models in an intuitive and effective manner. Prior to Microsoft, Enes worked at MathWorks, AMD and Amazon as a researcher and engineer. He was also an adjunct faculty at Texas State University and at the McCombs School of Business at the University of Texas at Austin.
“Architecture and Applications: The Groq View of AI’s Future”
Bill Leszinske brings 30 years of experience running core semiconductor business functions across CPUs, SoCs, chipsets, and memory/storage products. Bill has a strong track record of growing new businesses on a global basis across the PC client, server, consumer electronics, and storage market segments. Most recently, Bill was a Corporate Vice President at Intel, where he ran strategy, product planning, product marketing, and business development for Intel’s memory/storage business. This included creating new market opportunities and an ecosystem of optimized solutions for datacenter and client platforms.
“AI Chip Architecture Journey, Outcomes, and Future Directions”
Dr. Naveen G. Rao is a corporate vice president and general manager of the Artificial Intelligence Platforms Group at Intel Corporation. Rao’s organization is responsible for artificial intelligence product development and strategy, including the deployment of new hardware architectures built specifically to accelerate deep learning in the data center and at the network edge, as well as software to further accelerate AI solutions.
We will talk about Rao’s beginnings with Nervana, with emphasis on architectural choices and workload demands, and come full circle to the present now that much of the ecosystem has shaken out. We will also discuss which architectures will succeed and why, given changes in the last few years in both frameworks/software and new use cases in AI at scale.
“Performance Benchmarking and Device Evaluation”
Peter Mattson is a staff engineer at Google Brain, where he originated and coordinates the multi-organization MLPerf benchmarking effort. Previously, he led the Programming Systems and Applications Group at NVIDIA Research, was VP of software infrastructure for Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. He has authored more than a dozen technical papers as well as four patents. His research focuses on accelerating and understanding the behavior of machine learning systems by applying novel benchmarks and analysis tools. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.
“Scalability, Efficiency, and Architectural Balance: A Chat with SambaNova”
Rodrigo Liang is the CEO and co-founder of SambaNova Systems. Prior to SambaNova Systems, Rodrigo was senior vice-president at Oracle Corporation where he was responsible for SPARC Processor and ASIC development. During his tenure at Oracle, he led one of the industry’s largest engineering organizations in developing high-performance microprocessors and releasing 12 major SPARC processors and ASICs for enterprise servers over the past 15 years. SPARC processor performance achieved numerous world records, and it continues to be a performance leader for enterprise applications.
Before joining Oracle via the Sun acquisition in 2010, Rodrigo was vice-president at Sun Microsystems where he worked on the development of the Niagara line of multi-core processors. Rodrigo holds master’s and bachelor’s degrees in electrical engineering from Stanford University.
“When AI Can Do Physics: New Frontiers and Growth Projections”
Dr. Mohan is a researcher in machine learning and physics at Los Alamos National Laboratory. His focus is on the blending of AI and physics applications, including multi-dimensional applications in fluid dynamics and turbulence modeling. He also specializes in large-scale signal processing and analysis of non-stationary phenomena. Special interests include multidisciplinary work with focus on applied math and machine learning for fundamental problems in science and engineering.
This topic will explore how traditional physics computations will shift with AI and what it could mean for the growth in AI systems in engineering realms, including aerospace, defense, manufacturing, and beyond.
“Co-Design Challenges, Opportunities for Next Generation AI”
Michael Wong is the Vice President of Research and Development at Codeplay Software, a Scottish company that produces compilers, debuggers, runtimes, testing systems, and other specialized tools to aid software development for heterogeneous systems, accelerators, and special-purpose processor architectures, including GPUs and DSPs. He is a member of the Khronos, MISRA, and AUTOSAR open consortia and is Chair of the Khronos SYCL heterogeneous programming language for C++, used for GPU dispatch in native modern C++ (14/17) and OpenCL, and he guides the research and development teams behind ComputeSuite and ComputeAorta/ComputeCpp. For twenty years before that, he was the Senior Technical Strategy Architect for IBM compilers.
“AI Hardware Evaluation from a User Standpoint”
Debo Dutta is a Distinguished Engineer at Cisco, where he leads a technology group at the intersection of algorithms, systems, and machine learning. During his tenure at Cisco he has built an AI team to create consistent AI in the enterprise and built a massive-scale data ingestion product called Zeus from scratch. In addition to his role at Cisco, he has been a visitor at the Department of Management Science and Engineering at Stanford. He is one of the founding members of the MLPerf benchmarking effort.
“Refining Machine Reasoning and the Hardware/Systems Impact”
Jason Gauci is currently a software engineering lead in Facebook AI and developer of a scalable reinforcement learning platform called ReAgent, which is used internally to improve Facebook products and services. He was previously a software engineer at Apple, Google, and Lockheed Martin. The focus of his segment will be on the emerging area of machine reasoning and the impact of these workloads over more traditional reinforcement learning approaches on hardware efficiency and overall performance.
“Challenges/Opportunities for AI in Healthcare: Hardware, Software, Legacy Gaps”
Fernanda Foertter brings a strong accelerated high performance computing systems view to the table, along with extensive large-scale workload perspectives that blend AI and traditional modeling and simulation. She is currently GPU Developer Advocate for Bioinformatics at NVIDIA. She was previously an HPC Programmer and Training Coordinator in the HPC User Assistance Group at the Oak Ridge Leadership Computing Facility. She participated in the CORAL project that selected Summit as the supercomputer to replace Titan, was co-PI of the Kokkos ECP project, served on the OpenACC and OpenMP language standards committees, and is the “inventor” of the GPU Hackathon series. Other interests include the intersection of HPC and AI, facilitating data integration workflows, and productivity in scientific application development.
“AI Chip Design Should Begin With Code”
Brian Rossa is Founder and CEO of FØCAL, a seed-stage company building performance engineering tools for AI.
Brian has an advanced degree in computational vision and has worked on dozens of computer vision architectures over a 15-year career, from whole-city LIDAR processing to real-time hand tracking for the operating room. FØCAL is solving the biggest problem in AI engineering: matching hardware to software to maximize application performance.
“When Latency Defines Inference System Performance: An Extreme Use Case”
Dr. Ryan Coffee is a senior staff scientist (laser physics focus) at SLAC National Accelerator Laboratory, where he focuses on nanosecond-level real-time AI with terabyte-scale datasets at the X-ray free electron laser facility. His is an extreme use case, illustrating one of the most challenging problems in AI inference: ultra-latency-sensitive workloads that demand efficient, high-performance classification and analysis. It will pave the way for thinking about other latency-sensitive AI inference workloads and what hardware/software will be required.
“Why Reconfigurability Will Be Key for Inference at Scale”
Ramine Roane is VP of AI and Software at Xilinx, where he focuses on AI and machine learning acceleration, IP libraries, and the ecosystem from datacenter to edge. His long career includes engineering roles at Abound Logic, Mentor Graphics, and Synopsys, among others. His focus at the event will be on the roles FPGAs will play in both datacenter and edge inference, with an emphasis on the flexibility of a reconfigurable architecture over a static ASIC.
“High Performance, Low Power: The FPGA Way to Consider Datacenter Inference”
Mr. Nijssen has over 20 years of experience in the FPGA and EDA industries in various technical and management positions. Mr. Nijssen joined Achronix as Chief Software Architect to manage the software development group, define the foundations and algorithms of the software system, and architect key aspects of the company’s FPGA architectures. In his current role, he is responsible for the productization of the company’s current products and R&D for future products. Prior to Achronix, Mr. Nijssen was at Tabula, where he was responsible for placement and timing analysis of a time-multiplexed FPGA technology. Prior to Tabula, he was one of the first engineers at Magma Design Automation, where he held multiple leadership positions in charge of routing and placement, data models, and customer deployment of Magma’s Blast Plan Pro hierarchical virtual prototyping and floorplanning products for very large ASIC designs. Mr. Nijssen received his MSEE degree from Eindhoven University of Technology in The Netherlands, and afterward followed its postgraduate program studying EDA for VLSI. He holds several patents related to P&R and asynchronous circuit technologies.
“Inference from the Datacenter to the Extreme Edge: Device/Use Cases Perspectives”
Following a long technical leadership career at companies including Intel (where he directed cloud/edge/AI strategy), AMD (where he headed product management for edge and datacenter infrastructure and networking), and Ericsson (R&D lead for IP networking), Deepak Rana brings his experience to bear at Brevik Solutions, a consulting firm that analyzes the impact of AI, edge, cloud, and 5G technologies on the horizon. He will focus on the panels that discuss datacenter-to-edge transitions and which devices are at the fore.
Venture Capital/Market Perspectives
Kanu Gulati is a Principal at Khosla Ventures, focused on investments in data and ML-enabled enterprise applications. Kanu has over 10 years of operating experience as an engineer, scientist, and strategist. She developed advanced parallel CAD solutions and owned Intel’s multicore CAD algorithms research roadmap. Kanu has led early-stage investments in high performance computing (HPC), distributed systems and ML-enabled systems at Intel Capital and Zetta Venture Partners. Kanu was Employee #2 at MapD (hardware-accelerated data visualization) and has also held engineering roles at Nascentric (fast-circuit simulation tool, acquired by Cadence) and Atrenta (predictive analytics for design verification and optimization, acquired by Synopsys), among other startups.
Venture Capital/Market Perspectives
Vijay Reddy leads investments in artificial intelligence infrastructure, as well as applications of AI in several domains. Vijay is a board observer for, or has responsibility for, Matroid, SambaNova, AEye, DataRobot, Mesmer, Zumigo, and Paperspace, among others. Previously, Vijay sourced and led the Nervana engagement that ultimately led to the acquisition that formed Intel’s AI products group. Prior to joining Intel Capital, Vijay held leadership business development and product management positions in the communications, software, and wireless domains. He began his career as a researcher and entrepreneur in wireless and software engineering. Vijay is a Kauffman Fellow, received his MBA from Chicago Booth, and holds a BS and MS in Electrical and Computer Engineering.
Venture Capital/Market Perspectives
Michael Stewart joined Applied Ventures in 2015 after working for more than 12 years in advanced technology development at Applied Materials and Intel. Michael’s most recent investment was with portfolio company Electroninks where he serves as a board observer. Prior to joining Applied Ventures, Michael Stewart was co-founder of JUSE LLC, a consumer electronics focused startup, and the inventor of the low cost CRAFT Cell for silicon photovoltaics. He has developed high volume manufacturing products for crystalline Si solar and semiconductor device fabrication. He is an expert in silicon materials science, surface chemistry, and post-CMOS electronics, as well as chemicals and materials for electronics and biotechnology applications. Michael Stewart holds a Ph.D. in Chemistry from Purdue University and an MBA from the University of California at Berkeley (Haas School of Business), and is an inventor on over 40 US and world patent applications.
Many, many more. Full list and schedule on its way over the coming week. Stay tuned.
Remember, this event sold out last year. Get your seat ahead of the curve! AGENDA BELOW.
The Next AI Platform 2020 Agenda
8:15 – 9:00 – Registration with coffee, light breakfast, networking.
9:00 – 9:05 – Welcome and introductory statements from Next Platform co-founders (Timothy Prickett Morgan and Nicole Hemsoth) and special guest interviewers, Karl Freund (Moor Insights), Paul Teich (Liftr Insights), David Kanter (Real World Tech/MLPerf).
9:05 – 9:30 – “Reconfigurability, Efficiency for Datacenter Inference” with Ramine Roane, Xilinx.
9:30 – 9:55 – “The Evolution of Inference Systems at Facebook” with Misha Smelyanskiy (Director, AI at Facebook).
9:55 – 10:15 – “Architectural Analysis: Batch Size, Efficiency, and Real-World Constraints.” With Bill Leszinske (Groq).
10:15 – 10:35 – “Scalable Efficiency and Architectural Balance for Inference.” With Rodrigo Liang (CEO, SambaNova).
10:35 – 10:55 – Networking Break
10:55 – 11:15 – “Architectural Considerations for Next Generation Deep Learning” with Bill Dally (Chief Scientist, NVIDIA).
11:15 – 11:35 – “Building Vast, Scalable Conversational AI Platforms: Pain Points in Hardware/Software.” With Chandra Khatri (Senior AI Research Scientist, Uber).
11:35 – 11:55 – “Stable Devices, Long Roadmaps: The FPGA Path to Datacenter Inference.” With Raymond Nijssen (Chief Architect, Achronix).
11:55 – 12:15 – “Going Big on Inference: Chip Deep Dive in Use Case Context” with Andy Hock (Cerebras). Interview with early user, Brian Spears from Lawrence Livermore National Laboratory.
12:15 – 1:15 – Networking Lunch
1:15 – 1:40 – “Evolving Requirements for AI Inference Hardware: A Market, Technology Perspective.” With Lip-Bu Tan (CEO, Cadence/Walden International). Audience Q&A to follow.
1:40 – 2:05 – Inference Hardware Market Analysis/Perspectives. Panel featuring Lip-Bu Tan, Kanu Gulati (Khosla Ventures), Vijay Reddy (Intel Capital), Michael Stewart (Applied Ventures). Hosted by Karl Freund, Moor Insights & Strategy.
2:05 – 2:25 – “AI Training/Inference Performance Evaluation and Benchmarking.” With Peter Mattson (Google/MLPerf), Debo Dutta (Cisco/end user perspective), Maxim Naumov (Facebook). Hosted by David Kanter.
2:25 – 2:45 – “Using AI to Design AI Chips: EDA as Next Frontier for DL Growth.” With Elias Fallon (Engineering Director, Cadence Design Systems).
2:45 – 3:00 – “Hardware Requirements for Building Scalable AI Healthcare Platforms (Training/Inference).” With Miguel Alvarado (CTO, Lumiata).
3:00 – 3:20 – “Refining Machine Reasoning at Scale: The Hardware/Software Impact”. With Jason Gauci (Engineering Lead, Facebook).
3:20 – 3:40 – “I/O Considerations for Building Large-Scale AI Training and Inference Platforms”. With Scott Shadley (NGD Systems).
3:40 – 4:00 – BREAK
4:00 – 4:45 – AI Co-Design Considerations: Rapid-Fire Insights.
- “Co-Design Challenges, Opportunities for Next-Generation AI.” With Michael Wong, Codeplay.
- “AI Chip Design Should Begin with Code.” With Brian Rossa, FØCAL.
- “Challenges for AI in Healthcare (HW/SW).” With Fernanda Foertter, Nvidia.
- “Reducing Hyperscale Requirements for Large-Scale Reinforcement Learning.” With Enes Bilgin, Microsoft Research.
4:45 – 5:15 – “When Latency Defines Inference System Performance: An Extreme Use Case.” With Ryan Coffee, SLAC National Accelerator Laboratory.
5:15 – 5:25 – “Considerations for Building Large-Scale AI Systems: From Acceleration to Data Movement and Beyond.” With Dylan Boday, IBM.
5:25 – 5:45 – “Emerging Technology Requirements at Swisscom: AI From Datacenter to the Edge.” With Cornelius Heckrott, Swisscom. (Hosted by Deepak Rana).
5:45 – 5:50 – Closing Remarks from The Next Platform team.
6:00 – 7:00 – Happy Hour on The Glasshouse Patio (weather permitting). Appetizers, beer, wine, soda, etc.