From healthcare chatbots to instantaneous marketing materials, generative AI is revolutionizing how we approach business. Monica Livingston leads the AI Center of Excellence at Intel. In this video, she shares her insights into the world of generative AI, how it differs from other forms of AI, and the technology it requires to operate.

Transcript
00:00 (upbeat music)
00:01 Generative AI, as it implies in the name,
00:04 is actually generating new content from data.
00:06 So it's trained on very large data sets,
00:09 but the output that you get is brand new content.
00:12 You can even ask it to create content
00:14 like presentation materials or marketing slides.
00:18 So in that context, we'll see generative AI
00:21 in pretty much every industry.
00:23 (upbeat music)
00:28 Hi, my name is Monica Livingston
00:30 and I lead the AI Center of Excellence at Intel.
00:33 Generative AI is very, very new.
00:36 And a lot of the models out there,
00:38 we've not really gotten to the optimization phase.
00:42 And so Intel is focusing on that optimization
00:45 for those models as well to make AI accessible.
00:48 For training, we have our Habana Gaudi2 processors
00:53 and we have our discrete GPUs, the GPU Max family.
00:57 And then for inference, for the smaller models,
01:00 we use Intel Xeon Scalable processors,
01:03 and specifically the fourth generation
01:05 Intel Xeon Scalable processors
01:07 because of the Advanced Matrix Extensions,
01:10 or AMX, engine inside that processor.
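(For context: a minimal sketch of what bfloat16 inference on a 4th Gen Xeon with AMX can look like in practice, using PyTorch and the Intel Extension for PyTorch. The video shows no code, so the model checkpoint, prompt, and settings below are illustrative assumptions, and exact APIs may differ across library versions.)

```python
# Sketch only: small-model inference in bfloat16 on a 4th Gen Intel Xeon
# Scalable CPU, where the AMX engine accelerates bf16 matrix math.
# Model choice, prompt, and flags are illustrative assumptions.
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder; any small causal LM works for the demo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize applies CPU-specific operator optimizations; on AMX-capable
# Xeons the bf16 matrix multiplications map onto the AMX tile instructions.
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = "Generative AI is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```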
01:13 And then we're spending a lot of time
01:15 actually optimizing software for our hardware.
01:19 So for example, we have a tool out of Intel Labs,
01:22 it's called SetFit.
01:24 And what it actually does is enable smaller models
01:28 to run with the same type of accuracy,
01:30 but be trained on a much smaller data set.
01:33 Being able to train on less data,
01:36 again, makes these models much more accessible.
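(For context: SetFit was open-sourced by Intel Labs together with Hugging Face, and the sketch below shows the kind of few-shot workflow it enables via the `setfit` Python package. The dataset, checkpoint, and eight-examples-per-class setting are illustrative assumptions rather than details from the video, and the trainer API may differ across library versions.)

```python
# Sketch only: few-shot text classification with the open-source `setfit`
# library. Checkpoint, dataset, and sample counts are illustrative.
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# A small sentence-transformer backbone plus a lightweight classification head.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Few-shot setting: only 8 labeled examples per class instead of a full dataset.
dataset = load_dataset("sst2")
train_ds = sample_dataset(dataset["train"], label_column="label", num_samples=8)
eval_ds = dataset["validation"]

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    column_mapping={"sentence": "text", "label": "label"},  # map dataset columns
)
trainer.train()
print(trainer.evaluate())  # e.g. {'accuracy': ...}
```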
01:39 Most installed bases, most data centers
01:40 have fourth generation Intel Xeon Scalable processors
01:44 in their infrastructure.
01:45 For us, optimizing these types of models on Xeon
01:49 means that they are accessible to those enterprises
01:51 because they already have this infrastructure.
01:53 You're not having to stand up new boxes.
01:56 The good thing about AI is that it's all open source
01:58 and it's online.
01:59 And so upskilling to AI is really not limited or selective.
02:04 It's very accessible.
02:06 Just think about all of the different
02:08 types of possibilities.
02:10 You can already see very context-specific models out there
02:15 for law, for example, for medicine.
02:18 That's extremely helpful to those industries
02:20 because they're heavily paperwork-focused.
02:22 And so when you have to do a lot of paperwork,
02:25 doing that with some of these large language models
02:28 saves a significant amount of time.
02:30 As generative AI becomes more prevalent
02:32 and AI in general becomes more prevalent,
02:35 Intel continues to explore ways to optimize these models
02:40 and allow customers access to a wide range of hardware
02:44 and software tools in order to run these models
02:47 much more accessibly and much more cost-effectively.
02:50 (upbeat music)
02:53 (electronic music)
