Veteran IT Journalist Jeffrey Burt Joins The Next Platform as Senior Editor

We are thrilled to announce the full-time addition of veteran IT journalist Jeffrey Burt to The Next Platform’s ranks.

Jeffrey Burt has been a journalist for more than 30 years, the last 16-plus of them spent writing about the IT industry. During his long tenure with eWeek, he covered a broad range of subjects, from processors and IT infrastructure to collaboration, PCs, AI and autonomous vehicles.

He’s written about FPGAs, supercomputers, hyperconverged infrastructure, software-defined networking (SDN), cloud computing, deep learning and exascale computing. Regular readers here will recognize that his expertise in these areas fits squarely within our coverage scope.

We asked Jeffrey Burt to offer some perspective about his scope of coverage for 2017 and beyond…

***

Data centers today bear little resemblance to their forebears of almost 17 years ago, when I first entered the world of IT journalism.

In 2000, they were heavily siloed environments: servers ran bulky enterprise applications, data sat in dedicated storage systems, and the networks tying them together were essentially plumbing. The performance of those servers relied heavily on processor speed, with new generations of chips from the likes of Intel, AMD, IBM and Sun Microsystems defined essentially by their ever-increasing frequencies.

The client-server environment was a static one, with infrastructure, PCs, applications and data housed behind the corporate firewall. It had its own complexities, but in many ways data centers were much simpler: easier to define, secure and manage.

It didn’t stay that way for long; few things do in tech ecosystems, where new pressures force changes in the paradigm. Servers continued to get smaller and data centers denser, and as processor frequencies climbed, concerns over heat generation and power consumption, and the growing costs associated with both, moved to the forefront. Chip makers were forced to find new ways to increase the performance of their silicon while making it more energy efficient, leading to the rise of multi-core processors that could run large numbers of instruction threads, and later to the use of accelerators such as GPUs and FPGAs to help meet the growing demand for parallel computing.

The focus on energy efficiency has also given hope to non-x86 chip makers, such as ARM and IBM with its OpenPOWER effort, that are looking to chip away at Intel’s dominant position in the data center chip space by offering low-power processor alternatives.

Servers became smaller, first with rack and blade systems, and then virtualized, enabling companies to run more workloads without buying more servers or adding data center space. Virtualization has since moved into the storage arena and, most recently, into the networking realm, with the emergence of technologies such as SDN and network functions virtualization (NFV).

The virtualization of hardware has fueled the move toward software-defined data centers: highly scalable, flexible and agile environments in which applications determine the level of resources they need and draw that compute, storage and networking capacity from a shared pool. Once the resources are no longer needed, they are returned to the pool for use by other workloads.
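
To make the pooling idea concrete, here is a deliberately minimal Python sketch. The ResourcePool class and the capacity figures are invented for illustration; real software-defined infrastructure does this through schedulers and orchestration layers at far greater scale and sophistication.

```python
# Illustrative only: a toy resource pool in the spirit of a software-defined
# data center, where workloads draw capacity and return it when finished.
# The class and the numbers below are hypothetical, not any vendor's API.

class ResourcePool:
    def __init__(self, cpus, storage_tb, bandwidth_gbps):
        # Track the free capacity of each resource type in one place.
        self.free = {"cpus": cpus, "storage_tb": storage_tb,
                     "bandwidth_gbps": bandwidth_gbps}

    def allocate(self, request):
        # Grant the request only if every resource is available in full.
        if any(self.free[k] < v for k, v in request.items()):
            raise RuntimeError("insufficient capacity in the pool")
        for k, v in request.items():
            self.free[k] -= v
        return dict(request)

    def release(self, grant):
        # Return the workload's capacity to the pool for other workloads.
        for k, v in grant.items():
            self.free[k] += v


pool = ResourcePool(cpus=128, storage_tb=500, bandwidth_gbps=400)
grant = pool.allocate({"cpus": 16, "storage_tb": 10, "bandwidth_gbps": 25})
# ... the workload runs against the granted resources ...
pool.release(grant)  # the capacity is now available to other workloads
```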

The rise of cloud and mobile computing and the Internet of Things continues to roil the data center environment. Hyperscale players like Google, eBay, Facebook and Alibaba operate massive data centers to support the huge numbers of workloads they run, and they have become major consumers of data center technologies. They demand fast, efficient and cheap systems, and their dominance in the market now has system and chip vendors bending their way, creating products designed to meet those demands. Where once system OEMs rolled out new products that end users had to adapt for use in their data centers, now it’s more often the customers making the system makers jump to meet their needs. The needs of the hyperscale players have also fed the growth of the open-hardware movement, an effort built along the lines of what Linux did for software a decade earlier.

In a mobile- and cloud-centric world, corporate applications are no longer housed in data centers behind firewalls. They can be accessed from anywhere and on any device, putting pressure on enterprises to find ways not only to manage them but also to secure them. And as the number of devices and applications grows, so does the amount of data they generate: zettabytes that need to be analyzed in real time so companies can make relevant and timely business decisions.

Now AI and deep learning are entering the picture, bringing greater intelligence to the devices, systems, sensors and applications, and to the data they generate. More computing, meaning more processing power, storage and analytics, will happen at the edge of the network, putting pressure on infrastructure vendors to offer technologies that stretch from the edge back to the core data centers and into the cloud.

All of that is the source of my excitement at joining the folks at The Next Platform. The site is focused on the part of the industry where the innovation and development are happening, covering not only what is happening and where things are going, but what it all means for the industry, its vendors and its end users. What’s being talked about here today will have significant impacts tomorrow and beyond. I’m looking forward to adding to those discussions in the months and years to come.
