Last week we brought you The Next Database Platform live event, and now we are providing most of the sessions in the full recording below. Use the timestamps below to jump to the sessions and interviews of particular interest and to skip past breaks and bumper material. We’ll be providing more in-depth analysis of select sessions over the next couple of weeks as well. Thanks again to all who attended last week; great conversations all around. Thanks as well to our sponsors (see below) for making this event free, open, and possible.
Timestamps for Select Event Interviews
8:55 – Kickoff/Introduction with host Timothy Prickett Morgan, co-founder, The Next Platform
11:35 – “Building The World’s Largest Known Healthcare Graph” with Edward Sverdlin, VP, Advanced Technology/R&D at UnitedHealth Group
36:45 – “Increased Scale And Database System Impacts, Choices” with Keren Bartal, Director, Data Engineering at Taboola
55:25 – “Document Databases In The Oil And Gas Industry” with Jim Wang, Software Engineering Manager, Corva (oil and gas analytics end user)
1:15:05 – “The Cloud Native Database Inspired By Borg And YouTube” with Sugu Sougoumarane, founder and CTO, PlanetScale
1:30:53 – “What Developers Really Want: Document Databases” with Asya Kamsky, Principal Engineer, MongoDB
1:53:11 – “The Inevitability Of Graph Databases” with Sherman Ye, founder and CEO, VESoft (Nebula Graph)
2:14:55 – “The Need For Accelerated Databases For Analytics And Visualization” with Todd Mostak, founder and CEO, OmniSci (hardware-accelerated databases)
2:30:24 – “The Third Wave Of Federated Databases” with David Simmen, co-founder and CTO, Ahana
2:44:18 – “Distributed SQL For All” with Shane Johnson, MariaDB
3:10:00 – “Why Presto Is The Next Database Platform For Analytics” with Dipti Borkar, co-founder and chief product officer, Ahana
3:29:07 – “Transforming Enterprise Data Architectures For Cloud And Edge Applications” with Srini Srinivasan, founder and chief product officer, Aerospike
3:46:41 – “The Great Digital Pivot – Shifting Business And Tech Priorities” with Yu Xu, founder and CEO, TigerGraph
4:10:24 – “Dealing With The Unpredictable Using Cloud Native Databases” with Penny Avril, Director, Product Management for Databases, Google Cloud
About The Next Database Platform
The IT world would have been a far simpler and easier place if the relational databases that were commercialized in the 1980s and expanded in the 1990s and 2000s could have absorbed new data types quickly and efficiently while at the same time scaling up within larger machines and scaling out across multiple machines. To their credit, the remaining major relational databases used by enterprises – Oracle, IBM DB2, Microsoft SQL Server, and MySQL and PostgreSQL in their various guises – have done a pretty good job of absorbing object, XML, and JSON document formats as well as adding columnar data storage and in-memory options.
But invariably, these relational databases come to a breaking point where they can’t get answers fast enough, they can’t scale across enough compute and storage to hold extremely large databases, or both. And they are always – always – very expensive. The cost of the database software can rival that of the compute, storage, and networking that underpins the database, and then there are always supplemental costs for add-ons such as caching and messaging interfaces that attempt to speed things up.
And so, as the types of applications and the types of data being stored keep expanding, and as demands for lower latency and greater scale remain relentless, it is no surprise that there has been a true Cambrian explosion in the database industry over the past several years.
We not only have the NoSQL and NewSQL databases that emerged a decade ago because of the limits of legacy relational databases, but also a whole new crop of databases that store information in time series, graph, object, document, and relational formats, with varying degrees of structure and schema. Increasingly, these databases allow the lingua franca of database querying – Structured Query Language, or SQL, which grew out of the relational model that IBM database pioneer Edgar Codd outlined in his seminal 1970 paper, “A Relational Model of Data for Large Shared Data Banks” – to be used in its full glory, or pretty close to it.
Because performance matters, many of these databases can run in-memory, and a number of them can be accelerated by flash or 3D XPoint storage or by adjunct compute engines such as GPUs or FPGAs. More than a few of them have automagic scaling and data partitioning (important for data sovereignty reasons) across vast geographic distances. And, here is the important part, many of these new database alternatives are considerably cheaper to acquire than those legacy relational databases. This is analogous to the X86 processor taking on mainframes, proprietary minicomputers, and RISC/Unix servers in the early 1990s, which obviously had a huge impact on the modern datacenter.
But rather than creating a single substrate of general purpose compute that drove up volumes and drove down prices, as the X86 engine did in the datacenter, this database revolution is spawning variety in all its splendor and spurring competition that is driving down prices. It doesn’t hurt that, to win early adopters, these companies have to offer very attractive pricing for a given amount of structured or semi-structured data compared to those legacy databases; the risk of changing databases is so large that the reward has to be great. Those who need lower latency or higher scale than a legacy relational database can provide can be charged a premium – and they will pay it, too, because they need to solve their latency and scale problems.
There is a new era in databases, and we are thrilled to be exploring it with you.