He dresses that stance up in a ‘consumer advocacy’ coat of paint, but it’s nonsensical. He seems to think that Intel, AMD, and Nvidia should only work on incremental upgrades and never deviate from the current paradigm because it’s ‘for the consumer’.
It may not have come out in this video, but the whole argument he's been making is that all these shops (Intel, AMD, nVidia, Micron) have been getting CHIPS Act money. Taxpayer money, funding something that's ultimately fucking over the taxpayer/regular PC-building consumer. If it was just a matter of companies shifting and leaving consumer markets, that'd suck, but it'd still feel like a regular "free market" situation. But it's not. It's taxpayer money subsidizing enterprise products.
He actually had something positive to say about big N's server bit, which is impressive considering it was 99% Jensen performing a humiliation ritual.
I'm glad he did go over the impressive stuff in the second act of this video, and didn't just focus on the negative. Jensen talking about the machines being able to "think" and "reason" shows he either has no fucking idea at all how predictive token generation in LLMs actually works, or he fully realizes it's bullshit but is saying all the marketing speak that was written for him anyway. It doesn't matter if the teleprompter goes off and he has to improvise, because what was written on there literally didn't make any sense to begin with.
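For anyone wondering what "predictive token generation" actually means here, this is a toy sketch in plain Python (hand-written bigram probabilities standing in for a real model — an actual LLM computes these with a neural net, but the generation loop has the same shape): score the next token, sample it, append, repeat. There's no separate "thinking" or "reasoning" step anywhere in the loop.

```python
import random

# Toy "model": hand-written next-token probabilities. Purely
# illustrative -- a real LLM produces these scores with a neural net,
# but generation is still just score -> sample -> append, one token
# at a time.
BIGRAMS = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"machine": 0.5, "model": 0.5},
    "a":        {"machine": 0.5, "model": 0.5},
    "machine":  {"predicts": 0.8, "thinks": 0.2},
    "model":    {"predicts": 0.8, "thinks": 0.2},
    "predicts": {"<end>": 1.0},
    "thinks":   {"<end>": 1.0},
}

def generate(seed=0, max_tokens=10):
    rng = random.Random(seed)
    tokens = ["<start>"]
    while tokens[-1] != "<end>" and len(tokens) < max_tokens:
        dist = BIGRAMS[tokens[-1]]
        # Sample the next token in proportion to its score. This loop
        # IS the whole generation process.
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)
    return " ".join(tokens[1:-1])

print(generate())  # e.g. "the machine predicts"
```

Swap the toy table for a billion-parameter network and the loop doesn't change; that's the point people keep making when they push back on the "reasoning" framing.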
Who is all of this for? We're seeing OpenAI and Anthropic struggle to turn a profit, even with companies buying up GitHub Copilot licenses and consumers actually paying for the $200+ GPT Pro plans just to get the older models back and regain access to their GPT girlfriends.
If you actually run this shit at home, you'll immediately see how computationally expensive it all is. I've got ComfyUI running on an AMD Pro R9700 and an older Pro W6800. For some of the more complex models, generating a batch of images takes about the same amount of time on both cards (~18 min for a batch of 8~10, depending on the prompt and LoRAs loaded). The newer R9700 is faster at loading/switching models thanks to the newer gen of PCIe. From what I've read, this pales in comparison to even the lowest-end nVidia chips, but a 5070 isn't going to have 32GB of RAM. Rendering on Linux on an AMD card also slows the machine to a crawl and makes it unusable, so I'm often running renders on the machine I'm not currently using.
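To put those batch times in perspective, here's the back-of-the-envelope per-image math using the numbers above (just arithmetic, nothing card-specific):

```python
# ~18 minutes for a batch of 8-10 images (my numbers from above)
batch_minutes = 18
for batch_size in (8, 10):
    per_image = batch_minutes * 60 / batch_size  # seconds per image
    print(f"batch of {batch_size}: ~{per_image:.0f}s per image")
# works out to roughly 108-135 seconds per image
```

Around two minutes per image, on workstation-class cards, just for inference — now scale that up to the usage these companies are promising and the power/hardware bill explains itself.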
So maybe an nVidia card would be nice for some of this stuff, but the useful ones are still insanely expensive. Most DGX units are $3k~$5k, and they're often sold in two-packs for people who, I guess, think a set of AI bricks is worth the price of a usable 2000s Honda Civic.