Unleashing An Open Source Torrent On CPUs And AI Engines

When you combine the forces of open source and the wide and deep semiconductor experience of legendary chip architect Jim Keller, something interesting is bound to happen. And that is precisely the plan with AI startup and now CPU maker Tenstorrent.

Tenstorrent was founded in 2016 by Ljubisa Bajic, Milos Trajkovic, and Ivan Hamer and is headquartered in Toronto. Keller was an angel investor and an advisor to the company from the get-go, and was brought in as chief technology officer in January 2021 after a stint at Intel’s server business, where he cleaned up some architectural and process messes as he had done during a previous job at AMD. In January of this year, Keller was tapped to replace Bajic as chief executive officer, and the company is today announcing that it will bring in somewhere between $120 million and $150 million in its Series D funding, with Hyundai Motor Group and Samsung Catalyst Fund leading the round and with prior investors Fidelity Ventures, Eclipse Ventures, Epiq Capital, Maverick Capital, and others kicking in dough. To date, that will bring the investment kitty to somewhere north of $384.5 million and will probably boost its valuation above $1.4 billion.

All that money is interesting, and necessary to pay for the substantial amount of engineering work that the Tenstorrent team needs to do to create a line of commercial-grade RISC-V server processors and AI accelerators to match them and, more importantly, to take on the hegemony of the Nvidia GPU in AI training. It is going to take money – and maybe a lot more money, and maybe not – to help companies cut the costs of AI training. What we do know is that Keller thinks he has just the team to do it, and we had a chat with him about the Tenstorrent mission, one that we have been looking forward to.

We will do a deep dive on the Tenstorrent CPU and AI engine architectures in a follow-up.

Timothy Prickett Morgan: Let’s cut right to the chase scene. I have been dying to ask you this question because your answer matters. Why the hell do we need another AI accelerator?

Jim Keller: Well, the world abhors monopoly.

TPM: Yeah, but we have got so many different companies already in the game. None of it has worked to my satisfaction. It’s not like the Groq guys took the TPU idea, commercialized it, and we’re done. It’s not like MapReduce and Yahoo Hadoop. Nirvana Systems and Habana Labs both had what I think were good architectures, and Intel has not had huge success with either. Graphcore and SambaNova are reasonable, Cerebras has waferscale and that is interesting. Esperanto is in there, too, with RISC-V. And everybody, as far as I can see, has a billion dollar problem to get to the next level. I know RISC-V is important, that it is the Linux of hardware and we’ve been waiting a long time for that moment. But using RISC-V to build an accelerator is the easy part of making an architectural choice.

What is it that Tenstorrent is doing, or can do, that is different and better? I don’t expect you to spill all the architectural beans today, but what is driving you, and why?

Jim Keller: There are a bunch of things. First, whenever there’s a big hype cycle, more people get investments than are properly supportable by the industry. Ljubisa Bajic, one of the co-founders of Tenstorrent, and I had long chats when SambaNova and Cerebras had sky-high valuations. They raised a lot of money, and they started spending a lot of money, and we did the opposite. We had a $1 billion valuation post funding round last time and we were offered more money at higher valuations. And then we thought: Then what? Down rounds like everybody else? That’s really hard on your company. It puts both your employees and your investors in a bad spot. So we raised less money at a lower valuation because we are in it for the long term.

Now, we have analyzed what Cerebras, Graphcore, SambaNova, Groq, and the others are doing, and they all have something interesting or they wouldn’t get funded.

You can say, well, we’re not going to make those mistakes and we have something to bring to the table.

I don’t think GPUs are the be all and end all of how to run AI programs. Everybody who describes an AI program describes a graph, and the graph needs to be lowered with interesting software transformations and mapped to the hardware. That turns out to be a lot harder than is obvious for a bunch of reasons. But we feel like we’re actually making real progress on that. So we can make an AI computer that’s performant, that works well and is scalable. We’re getting there.

The other thing is that we started building a RISC-V CPU – and we at Tenstorrent had long chats about this – and we think the future is going to be mostly AI. There is going to be interaction between general purpose CPUs and AI processors, and that program and software stack, and they are going to be on the same chip. And then there’s going to be lots of innovation in that space. I called my good friends at Arm and said that we wanted to license it, and it was too expensive and they didn’t want to modify it. So we decided to build our own RISC-V processor. And we raised money in the last round partly on the thesis that RISC-V is interesting.

When we told customers about this, we were somewhat surprised – positively surprised – that people wanted to license the RISC-V processor standalone. And then we also found that some people who were interested in RISC-V are also interested in our AI intellectual property. When you look at the business model of Nvidia, AMD, Habana, and so on, they’re not licensing their IP to anybody. So people have come to us and they tell us that if we can prove our CPU or AI accelerator work – and the proof is silicon that runs – then they are interested in licensing the IP, both CPU and AI accelerator, to go build their own products.

The cool thing about building your own product is that you can own and control it and not pay 60 percent or 80 percent gross margin to someone else. So when people tell us Nvidia has already won and ask why Tenstorrent would compete, it is because whenever there’s a monopoly with really high margins that creates business opportunities.

TPM: This is a similar argument going on right now between InfiniBand, which is controlled by Nvidia, and the Ultra Ethernet Consortium. People keep telling me that Ethernet has been trying to kill InfiniBand since it was born. And I remind them that they are not competing with InfiniBand because it is dying. For the first time in two and a half decades, it is thriving. Same thing with Intel CPUs in the datacenter. There was no way 50 percent operating income for Data Center Group was going to hold over the long term. That kind of profit doesn’t just attract competition, it fuels it.

Jim Keller: In the real world, the actual gross margin is always somewhere in between. If you go much under 10 percent, you are going to really struggle to make any money and if you go over 50 percent you are going to invite competition.

Then there is the open source angle to all of this. The cool thing about open source is people can contribute. And then they can also have an opportunity to own it, or take a copy of it and do interesting stuff. Hardware is expensive to generate, taping out stuff is hard. But there are quite a few people building their own chips and they want to go do stuff.

Here is my thesis: We are going to start to generate more and more code with AI, and then the AI programs are an interaction between general purpose computing and AI computing, this is going to create, like a whole new wave of innovation. And AI has been fairly unique in that it has been amazingly open with models and frameworks – and then it’s running on very proprietary hardware.

TPM: A lot of the frameworks and models are not open source, and even those that are sometimes have commercial restrictions, like LLaMA, or have been closed off, like OpenAI in the transition from GPT-3 and GPT-3.5 to GPT-4.

Jim Keller: Yeah, there has been some very uneven terrain, I agree.

TPM: But I agree, there has been an element of openness to all of this. I would say something similar to relational databases decades ago.

So here is the question about open hardware: When you create a RISC-V processor, do you have to give it all back? What’s the licensing model?

Jim Keller: Here is the line that we are walking. RISC-V is an open source architecture, we have people contributing to that architecture definition. The reference model is open source, the guy who wrote the Whisper instruction set simulator works for us. We created a vector unit and contributed that. We built an RTL version of a vector unit and then open sourced that. We talked to a bunch of students and they said the infrastructure is good, but we need more test infrastructure. So we’re working on open sourcing our RTL verification infrastructure.

RISC-V now owns the university research for computer architecture. It’s the de facto, default thing. Our AI processor has a RISC-V engine inside of it, and we’ve been trying to figure out how to open source a RISC-V AI processor. Students want to be able to do experiments; they want to be able to download something, simulate it, make modifications, try and change it. And so we have a software stack on our engine, which we’re cleaning up so we can open source it, which we’re going to do this year. And then our hardware implementation has too many, let’s say, dirty bits in the hardware – you know, proprietary things. And we’re trying to figure out how to build an abstract version, which is a pretty clean RISC-V AI processor. And I would like to open source that because the cool thing about open source is once people start doing it and contributing to it, it grows. Open source is a one way street in this way: When people went to Linux, nobody went back to Unix.

I think we’re like 1 percent to maybe 5 percent of the way into the AI journey. I think there’s going to be so many experiments going on and open source is an opportunity for people to contribute. Just imagine, going back five years, if there was an open source AI engine. Instead of doing fifty random different things that didn’t work, imagine if they were doing their own random versions of an open source thing, but contributing back.

TPM: And that open source thing worked. Like GPT-3, for instance.

Jim Keller: Well, or that the net of all those people generated a really credible alternative to Nvidia that worked.

I’ve talked to lots of AI companies and when I was at Tesla, I saw lots of engines. And twenty companies would have 50 people working for two years building exactly the same thing the other nineteen companies all did. If that had been open source development, that would have moved a lot faster.

Some open source stuff, like PyTorch, has been open for a while, but the way the project ran wasn’t great; PyTorch 2.0 fixed that. TVM is open source – we use that and it’s actually quite good. We will see what happens with Chris Lattner’s company, Modular AI, and the Mojo programming language. He says he’s going to open source Mojo, which does additional software compiler transformations. But we don’t have a clean target underneath that to drive some of that stuff. And so I was just talking to my guys today about how we get our reference model cleaned up and make it a good open source AI engine reference model that people can add value to.

And once again, I think we’re in, you know, the early innings of how AI hardware is going to be built.

TPM: What’s your revenue model? You are going to build and sell things and you are going to license things, I assume?

Jim Keller: We build hardware. The initial idea was we’re going to build this great hardware. Last year, we got our first ten models working. We thought we had a path to maybe 30 models to 50 models, and we kind of stalled out. So we decided to refactor the code – we did two major rewrites of our software stack. And we are now onboarding some customers on the hardware we built. We did an announcement with LG, and we have several more AI companies coming along the pipe. Then we built this RISC-V CPU, which is very high end. SiFive is a fine company, but their projects are kind of in the middle, and Ventana’s a little higher than that. And people kept telling us: We would like a very high-end CPU. So we’re building a very high-end CPU, and we are in discussions with ten organizations to license it.

We are a design company. We design a CPU, we design an AI engine, we design an AI software stack.

So whether it’s soft IP, a hard IP chiplet, or a complete chip, those are implementations. We are flexible on that front. For instance, on the CPU, we are going to license it multiple times before we tape out our own chiplet. We are talking to like a half a dozen companies who want to do custom memory chiplets or NPU accelerators. I think for our next generation, both CPU and AI, we are going to build CPU and AI chiplets. But then other people will do other chiplets. And then we’ll put them together into systems.

TPM: They’re going to do the assembly and the systems, and you’re not interested in literally making a package that you sell to Hewlett Packard, Dell, or whoever?

Jim Keller: We’ll see what happens. The weird thing is, you really have to build it to show it. People say, I would really like to build a billion of those, so show me 1,000. So we built a small cloud, and we have 1,000 of our AI chips in the cloud. When we first started, we were just going to put the chips in servers and give people access. It’s really easy. There’s Linux running, or you can have bare metal.

TPM: That was my next question. If you look at companies like Cerebras and SambaNova, they are really becoming cloud vendors or suppliers to specific cloud vendors looking for a niche and also a way to get AI done cheaper and easier than with GPUs from Nvidia. By my math, it looks like you need around $1 billion to train a next-gen AI model, and that money has to come from somewhere, or a way has to be found to do it cheaper.

Jim Keller: I’d say about half the AI software startups don’t even know you can buy computers. We talk to them, we get them interested, and then they ask if they can try it out on the cloud. On the flip side, as companies scale up, they start realizing that they are paying 3X or more to run AI on the clouds than in their own datacenters – it depends on what you are buying and what your amortization time is. It’s really expensive.
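Keller’s 3X cloud premium is easy to sanity-check with back-of-the-envelope amortization arithmetic. All of the numbers in this sketch (server price, opex, cloud rate) are illustrative assumptions, not figures from the interview:

```python
# Rough cloud-vs-owned cost comparison for an AI training server.
# All prices and rates are hypothetical placeholders for illustration.

def owned_cost_per_hour(capex, amortization_years, opex_per_year):
    """Effective hourly cost of owned hardware over its amortization window."""
    hours = amortization_years * 365 * 24
    return (capex + opex_per_year * amortization_years) / hours

# Hypothetical inputs: a $250,000 server amortized over 3 years with
# $30,000/year of power and operations, versus a $40/hour cloud instance
# of roughly similar capability.
owned = owned_cost_per_hour(capex=250_000, amortization_years=3, opex_per_year=30_000)
cloud = 40.0

print(f"owned: ${owned:.2f}/hr, cloud: ${cloud:.2f}/hr, ratio: {cloud / owned:.1f}x")
```

With these placeholder inputs the cloud premium comes out to roughly 3X, in the ballpark Keller cites; the real ratio swings substantially with utilization and the amortization window.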

If we design a CPU and an AI accelerator that’s compelling, there are channels to the market: IP, chiplets, chips, systems, and cloud. It looks like to prove what you’re doing, you have to make chips, systems, and clouds to give people access to it. And then the key point is, can you build a business, build an engineering team, raise money, and generate revenue? Our investors mostly say we don’t need you to make a billion dollars, we need you to sell tens of millions of dollars worth of stuff to show signal that the customers will pay for it – that it works and that they want it. And that’s the mission we’re on right now.

We’re on the journey. I told somebody recently, when things don’t work, you have a science project; when things work, you have a spreadsheet problem. A spreadsheet is like this. Our current chips are in Globalfoundries 12 nanometer. And somebody says, how fast would it be if you ported it to 3 nanometers? There’s no rocket science to it. You know the performance of GF12 and TSMC 5N and 3N, and you just spreadsheet it out and then ask, “Is that a compelling product?”
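That spreadsheet exercise is simple enough to sketch. The node-to-node multipliers below are purely hypothetical placeholders (real values would come from foundry characterization data), but the mechanics are what Keller describes: apply known per-node scaling factors to a known baseline:

```python
# Back-of-the-envelope node-porting spreadsheet.
# The relative performance multipliers versus GF 12nm are hypothetical
# illustrations, not foundry data.
NODE_SCALING = {
    "GF12": 1.0,
    "TSMC 5N": 2.2,   # assumed combined frequency/density gain
    "TSMC 3N": 2.8,   # assumed
}

def projected_performance(baseline_tops, source, target):
    """Scale a chip's throughput from its current node to a target node."""
    return baseline_tops * NODE_SCALING[target] / NODE_SCALING[source]

# A hypothetical 100 TOPS chip on GF 12nm ported to TSMC 3nm:
print(projected_performance(100.0, "GF12", "TSMC 3N"))  # roughly 280 TOPS
```

Swap in real frequency and density factors and the “is that a compelling product?” question falls straight out of the ratio.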

Did I think we were going to have to do all these things when I started? No, not really. But then again, is it surprising that as a company selling full function computers you have to do everything? I used to joke that when you build a product, there’s the 80/20 rule, which is that 20 percent of the effort gets 80 percent of the results. And then there’s the 100 percent rule, which is that you have to do 100 percent of the things that customers need to be successful.

TPM: In the modern era, companies don’t have to buy one of everything interesting to see what really works and what doesn’t. So that’s an improvement. But no matter the deployment model, the costs for AI training are very high.

Jim Keller: This is always true during a boom cycle. I have talked to multiple VCs that say they are raising $50 million for an AI software startup and $40 million of that will end up going to Nvidia. When you’re in a rush, that’s a good answer. And then you think, well, I could get the same performance from Tenstorrent for $10 million, but you have to do a lot more work. And then they talk about the time value of money, and they spend the money now. But when the hype cycle starts to wear off, people start asking why they are spending this much money on stuff. Like, what are the credible alternatives? How do we lower the cost?

TPM: You will be standing there. How much lower can you make AI training costs with Tenstorrent chips?

Jim Keller: Our target is 5X to 10X cheaper.

TPM: To be precise, 5X to 10X cheaper than GPU systems of similar performance.

Jim Keller: Yeah. There are some technical reasons for that. We use significantly less memory bandwidth because we have a graph compiler and our architecture is more of a dataflow machine than GPUs are, so we can send data from one processing element to another. As soon as you use an HBM silicon interposer, it gets very expensive. One of the things that’s crazy right now is if you look at Nvidia’s markup on an H100 SXM5, most of the silicon content is from Samsung or SK Hynix. There is more value in the HBM DRAMs than in the Nvidia GPU silicon. And furthermore, if you want to build your own product, is Nvidia going to sell you an IP block or customize it for you? No.

TPM: Do you have any desire to do networking, or are you just focused on compute? I am hoping you give the right answer here.  

Jim Keller: We have network ports on our chips, so we can hook them together in large arrays without going through somebody else’s switch. This is one of the reasons why, technically, our approach is cheaper than Nvidia’s approach. Nvidia likes selling high margin InfiniBand switches. We build a box where we don’t need that.

In their current situation, Nvidia is a big margin generator. In our situation, we ask why would you put an InfiniBand switch between a couple hundred chips? Why not just have the chips talk to each other directly? I’ve talked to a couple of really cool storage startups with really interesting products, and then they tell me their mission is to have really high margins. I tell them our mission is to really drive the cost of this down. You have to pick your mission.

So if somebody comes to me and they want to license rights to our technology so they can modify it and build their own products, I think that’s a great idea because I think innovation is going to accelerate when more people are able to take something solid, and then work on it. And that’s partly because I have confidence that we’ll learn from whoever we partner with. We have some really good designers and we’re thinking hard about our next generation.

TPM: So how do you shoot the gap between being Arm before SoftBank acquired it and Arm after SoftBank did, when Nvidia was chasing it? You want to be Arm, not twisted Arm.

Jim Keller: At the moment, we are a venture funded company, and our investors want our technology to work and want positive signal on our ability to build and sell product, which is what we’re focused on.

We just raised a round with Samsung and Hyundai for two different reasons.

Samsung knows me pretty well because I’ve done products with them at Digital Equipment, Apple, Tesla, and Intel – and they were all successful. They are interested in server silicon, in autonomous driving silicon, and in AI silicon. They think RISC-V will be a generator of revenue, and they want to invest in that.

Tenstorrent celebrating its Series D round with Hyundai Motor Group

Hyundai came out of the talks we are having with every automotive company on the planet, and they all feel the industry needs to do something about the hold Mobileye and Nvidia have on them. They would like to have options, and many of the car makers would like to own their own solutions. Hyundai got very interested in us and said they wanted to invest, and they have become the number three automaker and they just bought Boston Dynamics, and they partner with Aptiv through Motional. They are making money building cars and other products, and they are very forward leaning.

In an environment where there’s going to be rapid change, you build a team around great people, and then you raise money. We’re raising over $100 million on an up round, in a tough market, and to be honest, it took a lot longer to close than it did last time, that’s for sure. I like working with Samsung, I had a lot of success with them. They’re a good, solid fab. They have a big IP portfolio, and we’re going to help them build a premium product and bring it to market. The Hyundai guys are great, and I have talked to a bunch of people. They’re super smart. They want to build chips, they want to go fast. There’s lots of opportunities.


14 Comments

  1. Wow — super-engaging interview (especially during an SC workcation!)! In terms of staking out one’s turf, I’d say taking ownership (or leadership, and some level of control) of RISC-V’s 64-bit ISA could be just the ticket (instructions ending in 0111111 in v2.2 of the spec). Intel’s AVX10 (CISC) and IBM’s POWER10 (RISC) both leverage 64-bit encodings of instructions and one may suspect an advantageous reason behind this (e.g. Paul’s (Berry) idea of merging quantized GPU-style tensor ops into the CPU’s ISA). Plus, with 64-bit data processing, 64-bit instructions are quite natural, and an organically 2W3R ISA should emerge as more obvious and elegant than it ever could under 32-bit constraints.

    As a side note, the French Manicamp cheese (from Quierzy, Aisne) is outstanding if you can import it — kind of like Virginia’s Grayson, or Germany’s Limburger, but with France’s superior (flavorwise) microbiology (a definite must)!

    48-bit instructions (as suggested elsewhere for RISC-V) are quite ridiculous (not a power of 2), but 64-bit makes it possible to specify either a single large immediate, or several BF16’s, in a single instruction, which should help tackle memory access issues by specifying constants directly in code. A great avenue of leadership for upcoming Tenstorrent activity!

    • Indeed, chapter 20 of “The RISC-V Instruction Set Manual: Volume I, Unprivileged Architecture” (12/2022, 05/2023), entitled: “J” Standard Extension for Dynamically Translated Languages (Version 0.0), is a must read! It was Chapter 14 in the 2017 Manual, and 18 in the 2020 Manual, and has remained very stable. It deals with the most widespread of application languages (e.g. Python):

      “[…] popular languages […] implemented via dynamic translation, including Java and Javascript [that] can benefit from […] dynamic checks and garbage collection”.

      One logically infers from that detailed spec, which very concisely speaks volumes, that the need for a 2W3R design will self-prompt engineers to evolve a consistent 64-bit-wide instruction set that simultaneously supports dynamic languages and HPC/AI vector ops. The performer to watch out for will be RISC-VJ (RISC-VI++) — like a James Brown RISC Machine … d^8

  2. I’d love to see that Tenstorrent Ascalon (Very Wide Order Superscalar) RISC-V CPU core in a laptop SoC with Imagination Technologies’ PowerVR Photon GPU micro-architecture for the iGPU. Laptops and gaming handhelds (especially) are a growing market segment, and even gaming is making use of AI for upscaling and frame interpolation. The handheld gaming market has real growth potential if there can be longer battery life without sacrificing performance. Unlike an ARM ISA based device, which has to implement the full ARM ISA even if parts of it may not be needed in a gaming handheld, a RISC-V based device could implement only the ISA elements needed to run Steam OS/other OSs, plus maybe some gaming-focused custom RISC-V ISA extensions for improved gaming workload performance, in a manner that is not allowed by ARM Holdings currently. Imagine RISC-V ISA extensions that let emulators run more efficiently for games emulation workloads on handheld gaming devices – something that could never be done under ARM Holdings’ restrictive ISA licensing agreements.

  3. Some are taking ‘what is monopoly’ out of context from this report and I will post the same reply here;

    Monopoly is more than 60 percent profits. Granted, someone will want to fly under that umbrella, but a natural monopoly is not illegal, nor is a cartel. The Nvidia ecosystem is cartelized, and the question is whether that is for good or bad. The test of an abusive monopoly, or cartel (which is basically an associate membership club), is seen in its closed-system or quasi-closed-system frameworks. The key question is whether commerce is being restrained by the associate network, or, in the case of a monopoly, by its enabler and/or dealing group, which gets to the cartel part of monopoly.

    Antitrust has little to do with competition and more to do with restraining or limiting trade (don’t get me into export controls); it’s all about open commerce. The key question is whether said monopolist or cartel, or combination of both, is restraining a demanded product: a competitive offering to the monopoly and/or cartel products and/or services that is offered competitively on superior terms. The evidence of that is typically found in “general systems” – how the system is limiting – and the proof is found in explicit contracts. Tacit cartelization also exists, seen in systems, proven on repeating occurrences and patterns of predation.

    On the subject of “superior terms,” one of those terms is the ability to supply, which has always been a stickler in the AMD v Intel confrontation. If AMD cannot supply a demanded product, and one of the purchase terms is volume (supply), then AMD failing to meet that term presents a deal breaker. If, in turn, end product buyers continue to be fed Intel “kibble” or “dog food” (both are Intel sales terms) for product that is “just good enough,” and buyers buy because there is no other choice, then there is no actionable restraint. The same can be said for Nvidia.

    Noteworthy Intel antitrust case matters have been, and continue to be, resolved on explicit contracts (dumb dumbs to be caught that way); on false certifications, whether the Intel Corp. USDOJ antitrust compliance agreement, GSA Title 48 procurement regulations, or financial reporting; on production (micro) economics, primarily cost < price sales; and on general system observations. But for Intel, the explicit contracts and false certs are the ultimate proofs of the “system crime.” The catch is documenting the monopolist in a sophistry where they can provide no system counter proofs, and Intel cannot, vis-a-vis “not a monopoly and no consumer harms.”

    Now apply these standards and inquiries to Nvidia . . . I'll take all the help I can get.

    Mike Bruzzone, Camp Marketing
    FTC Docket 9341 consent order monitor; AMD, Intel, Nvidia

    • I didn’t see anyone taking it out of context. Monopoly is more than 80 percent market share, and the ability to control price to a certain extent. The X86 platform as a group certainly does that, but Arm has its share and RISC-V will get its share.

  4. Tim, well said, 80 percent share. However, some media, on access and enduring audience (my take over the long term), will not address your point specifically, leaving counter point to others, and some in the general audience have misinterpreted 60 percent profit for 80 percent share, so I will pass your point on with attribution. And this is my fourth attempt at this response, with Next Platform AI rejecting it as a duplicate. Seeking Alpha has also implemented new screening tools that, on parameters, don’t allow real-time counter point, and limit it. mb

    • Mike, love you man, but you gotta give me a chance to moderate comments. This is not automated, and it never will be.

      • Acknowledged on “never automated” – you check when a comment is flagged by the system as a duplicate, even with no resemblance to the original; OK, a flag calling attention for your consideration.

        My audit shows some believe the bots. If a system work said or concluded the system work must be right on the vastness of stored knowledge, said knowledge? Individual entities or group think or save your insurance? Over time continuing evolutionary improvements? Down what track? Independent and or mutually dependent observers also note corporate-esk of sorts seems to end in the entity observed not responding or engaged in diversions that stretch things out to make oversight observations go away.

        Obviously this is not about large language models, I think, at least not yet and recommendation systems seem to work on my derived preferences.

        Others believe triangulation to formulate reasoning from said archive of knowledge in any number of ways can be flawed and that I take this is your point.

        mb

  5. I hold my breath on Tenstorrent. I’ve been waiting for their initial product launch for two years and nothing has happened so far. Obviously they are changing direction now again. I’ll say wait and see; so far their execution has been rather lacklustre, and it does not matter if it has a name like Jim Keller behind it or not. In the end you have to deliver.

    • I agree, this was sort of a stunning interview. Tenstorrent has been at it since 2016. And Keller has been there since 2020. But he’s been in the eye of AI chip development for a lot longer than that (at Intel and Tesla). He’s got, what, 3 or 4 AI chips under his belt by now.

      So it’s hard to get over him alluding to, or using the term, “signal”. Like proof of life or something. “A positive signal” on our ability to build and sell product.

      I get that what they’re shooting for is hard. I just thought they’d be further along than this sort of proof of life at this point. It sounds tenuous. And then there is always the software content which I’m not sure happens until they have a viable product. It just points to the idea Nvidia is going to be in the driver’s seat for some time to come.

      • In my view, Keller’s Tenstorrent approach is more conceptual than the more common GPU-repurposing avenue (or prior-product pragmatism?); to wit, the key nexus of their contribution (possibly slightly buried in the volubility of the exchange) seems to be:

        “the graph needs to be lowered with interesting software transformations and map that to the hardware […] we have a graph compiler and our architecture is more of a dataflow machine”

        Amen to this sort of innovation (in my opinion).

  6. I think Keller is without any doubt a brilliant CPU designer (it will be interesting to see what their very-high-end RISC-V CPU will be) and very enthusiastic about tech, but as the saying goes, “the eyes of love are blind”.
    One statement of Keller’s does not draw good security foresights on the horizon, but it is just as it is, so no criticism of Mr. Keller, as he just says what is the case and what will be:
    “Here is my thesis: We are going to start to generate more and more code with AI, and then the AI programs are an interaction between general purpose computing and AI computing, this is going to create, like a whole new wave of innovation.”
    -> Code itself already gets more and more complex (something the KISS approach fights against; that’s why I like, e.g., OpenBSD), so that we already have problems keeping it secure, as complexity is the enemy of security. We just don’t understand it anymore.
    AI gives outputs nobody can predict; otherwise we would not need, or rather would not use, AI. (As I see it, we do not need it: somebody still has to check the code from the AI, and it already means a lot of work to check and try to understand somebody else’s code. So what will it be like when one has to check the code of a machine with, in a sense, multiple personality disorder, which an AI somehow is?) So we already don’t fully understand our own complex code, and we want to extend this in unknown directions?
    …not to mention the hardware, which we also don’t fully understand; the latest proof being Downfall and Inception.
    -> So we get code that nobody can predict, “written” by code (the AI) that nobody really oversees completely, running on hardware we don’t fully understand…
    -> I would say this is “opening Pandora’s box within Pandora’s box”, or “Pandora’s matryoshka”.
    …but that’s the way hubris always goes, when non-eternal things are made into a god besides the real God, who, e.g., showed up in the world 2,000 years ago and still speaks through his word:
    Romans 1:22
    Seeming to be wise, they were in fact foolish,
    Romans 1:23
    And by them the glory of the eternal God was changed and made into the image of man who is not eternal, and of birds and beasts and things which go on the earth.

    …sad to have to see this development…but I know it very well from my own past…until God crashed my life; otherwise I would never have become thankful for Jesus Christ, who paid for my sins at the cross and who is the only hope for anybody to have peace with God and not to fear the day of judgment. God is ready to forgive, but we are not ready to admit…yes, He really has hard work to do until someone gives up, as I personally know from myself…but I never regretted it, even if this and that from the past life sometimes attracts…I never want to go back into this dependence…

    Btw., as an ending to the comment (sorry, Mr. Morgan, for the long comment):
    When I first read Jeremiah 10, I was instantly reminded of our hope in IT to change everything (climate, diseases, …) for the better:
    Jeremiah 10:3
    For that which is feared by the people is foolish: it is the work of the hands of the workman; for a tree is cut down by him out of the woods with his axe.
    Jeremiah 10:4
    They make it beautiful with silver and gold; they make it strong with nails and hammers, so that it may not be moved.
    Jeremiah 10:5
    It is like a pillar in a garden of plants, and has no voice: it has to be lifted, for it has no power of walking. Have no fear of it; for it has no power of doing evil and it is not able to do any good.
    Jeremiah 10:6
    There is no one like you, O Lord; you are great and your name is great in power.
    Jeremiah 10:7
    Who would not have fear of you, O King of the nations? for it is your right: for among all the wise men of the nations, and in all their kingdoms, there is no one like you.
    Jeremiah 10:8
    But they are together like beasts and foolish: the teaching of false gods is wood.
    Jeremiah 10:9
    Silver hammered into plates is sent from Tarshish, and gold from Uphaz, the work of the expert workman and of the hands of the gold-worker; blue and purple is their clothing, all the work of expert men.
    Jeremiah 10:10
    But the Lord is the true God; he is the living God and an eternal king: when he is angry, the earth is shaking with fear, and the nations give way before his wrath.
    Jeremiah 10:11
    This is what you are to say to them: The gods who have not made the heavens and the earth will be cut off from the earth and from under the heavens.
    Jeremiah 10:12
    He has made the earth by his power, he has made the world strong in its place by his wisdom, and by his wise design the heavens have been stretched out.
    Jeremiah 10:13
    At the sound of his voice there is a massing of waters in the heavens, and he makes the mists go up from the ends of the earth; he makes the thunder-flames for the rain, and sends out the wind from his store-houses.
    Jeremiah 10:14
    Then every man becomes like a beast without knowledge; every gold-worker is put to shame by the image he has made: for his metal image is deceit, and there is no breath in them.
    Jeremiah 10:15
    They are nothing, a work of error: in the time of their punishment, destruction will overtake them.

  7. Interesting reflections. God’s top creation is mankind. I think the final human destiny is to survive our present complex era; the possibility of self-destruction is something real, but that event can be ruled out only if true love, I repeat, true love, is the principal world language.
