In a fireside chat, AMD's super-popular CEO, Dr. Lisa Su, has some words for anyone who might be feeling a little bit concerned about the inexorable rise of modern AI and what the impact of the rabid adoption of the tech might be: "we must go faster, we must experiment, but do it with a watchful eye."
Maybe that's not the most reassuring thing for the AI-concerned among us, the idea that actually things aren't going fast enough, but think for a second… do you really want things to stay as they are right now in the world of AI? Do you want to be in this limbo state where the data used to create the AI models of today comes from some wild west scraping of vaguely private and kinda public data?
Though, to be fair, if you are worried about where AI is going and what it's going to do, slowing down and sticking with the current models as they are might actually be more reassuring. For all their proponents and flashy demos, they're currently, fundamentally, a bit rubbish. Sure, they can approximate some text output, they're getting a bit better at figuring out the right number of fingers and joints in a human hand for image generation, and they're even getting better at coding. But the damn things still can't seem to deal with basic mathematics.
AI, as of right now, is like a proof of concept. And all the excitement and investment is about what these models might become from this point, extrapolated into the future.
All that needs a ton of compute power, which Dr. Lisa Su and AMD are very willing to try and supply. So, there is inevitably a little bit of self-interest in her answer. Of course AMD would like things to go faster, with more people adopting EPYC server chips and Instinct accelerators for their racks of AI machines.
Dr. Su's response came during her fireside chat at the SXSW Conference, where she was asked: what is AMD's responsibility when it comes to AI?
"All of us as large companies in the AI ecosystem have a responsibility," says Su. "When you think about the things about AI that we have to be careful about, it's how you train models. Making sure that the data that you're using isn't biased in some way. Right now there is all this conversation about, are the models giving you the right answer?"
"I think what we want to do," she continues, "is we want to make sure the models are giving you the best answer they can, and so we do have responsibility in how we implement the data that goes in and some of the models that are there and how we use the data; making sure that we're very protective of people's privacy and intellectual property. So those are absolutely things that we take very seriously.
"What I will say though is that the answer of those who are a little bit worried about what AI will do: the answer is not go slower, that is definitely not the answer. The answer is to be very cognisant of the risks—so we think about that, but this technology is so powerful, so transformational we must go faster, we must experiment, but do it with a watchful eye."
I would maybe add that, in addition to experimenting with a watchful eye, we ought to be doing it with some sort of legislation, some sort of agreed framework within which the hardware and software folk have to work. Because just letting these models be the playthings of billionaires and venture capitalists, to do with as they will, isn't going to work out too well for the rest of us in the long term.