{"id":4722,"date":"2024-04-18T22:49:14","date_gmt":"2024-04-18T22:49:14","guid":{"rendered":"https:\/\/game.intel.com\/?p=4722"},"modified":"2024-05-29T21:16:37","modified_gmt":"2024-05-29T21:16:37","slug":"wield-the-power-of-llms-on-intel-arc-gpus","status":"publish","type":"post","link":"https:\/\/game.intel.com\/fr\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","title":{"rendered":"La puissance des LLM sur les GPU Intel\u00ae Arc\u2122"},"content":{"rendered":"<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Ex\u00e9cuter facilement une vari\u00e9t\u00e9 de LLM localement avec les GPU Intel\u00ae Arc\u2122<\/h3>\n<\/blockquote>\n\n\n\n<p>L'IA g\u00e9n\u00e9rative a chang\u00e9 le paysage de ce qui est possible en mati\u00e8re de cr\u00e9ation de contenu. Cette technologie a le potentiel de produire des images, des vid\u00e9os et des \u00e9crits inimaginables jusqu'\u00e0 pr\u00e9sent. Les grands mod\u00e8les de langage (LLM) ont fait les gros titres \u00e0 l'\u00e8re de l'IA, permettant \u00e0 quiconque de g\u00e9n\u00e9rer des paroles de chansons, d'obtenir des r\u00e9ponses \u00e0 des questions de physique complexes ou de r\u00e9diger le plan d'une pr\u00e9sentation de diapositives. Ces fonctions d'IA n'ont plus besoin d'\u00eatre connect\u00e9es au nuage ou \u00e0 des services d'abonnement. Elles peuvent \u00eatre ex\u00e9cut\u00e9es localement sur votre propre PC, o\u00f9 vous avez un contr\u00f4le total sur le mod\u00e8le afin de personnaliser ses r\u00e9sultats.<\/p>\n\n\n\n<p>Dans cet article, nous allons vous montrer comment configurer et exp\u00e9rimenter les grands mod\u00e8les de langage (LLM) populaires sur un PC \u00e9quip\u00e9 de la carte graphique Intel\u00ae Arc\u2122 A770 16 Go. Bien que ce tutoriel utilise le LLM Mistral-7B-Instruct, ces m\u00eames \u00e9tapes peuvent \u00eatre utilis\u00e9es avec un LLM PyTorch de votre choix tel que Phi2, Llama2, etc. 
Et oui, avec le dernier mod\u00e8le Llama3 aussi !<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">IPEX-LLM<\/h2>\n\n\n\n<p>La raison pour laquelle nous pouvons faire fonctionner une vari\u00e9t\u00e9 de mod\u00e8les en utilisant la m\u00eame installation de base est due \u00e0 <a href=\"https:\/\/github.com\/intel-analytics\/ipex-llm\">IPEX-LLM<\/a>une biblioth\u00e8que LLM pour PyTorch. Elle est construite au dessus de <a href=\"https:\/\/github.com\/intel\/intel-extension-for-pytorch\">Extension Intel\u00ae pour PyTorch<\/a> et contient des optimisations LLM de pointe et une compression des poids \u00e0 faible bit (INT4\/FP4\/INT8\/FP8) - avec toutes les derni\u00e8res optimisations de performance pour le mat\u00e9riel Intel. IPEX-LLM tire parti de la technologie X<sup>e<\/sup>-XMX AI acc\u00e9l\u00e8re les GPU discrets d'Intel, tels que les cartes graphiques de la s\u00e9rie Arc A, pour am\u00e9liorer les performances. Il prend en charge les cartes graphiques Intel Arc s\u00e9rie A sur le sous-syst\u00e8me Windows pour Linux version 2, les environnements Windows natifs et Linux natif.<\/p>\n\n\n\n<p>Et comme tout ceci est du PyTorch natif, vous pouvez facilement \u00e9changer les mod\u00e8les PyTorch et les donn\u00e9es d'entr\u00e9e pour les ex\u00e9cuter sur un GPU Intel Arc avec une acc\u00e9l\u00e9ration de haute performance. Cette exp\u00e9rience n'aurait pas \u00e9t\u00e9 compl\u00e8te sans une comparaison des performances. 
Using the instructions below for Intel Arc, and the commonly available instructions for the competition, we looked at two discrete GPUs positioned in a similar price segment.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img fetchpriority=\"high\" width=\"1280\" height=\"720\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-LLM-Execution-on-Arc-A770-2.png\" alt=\"\" class=\"wp-image-4782\"><\/figure>\n\n\n\n<p>For example, running the Mistral 7B model with the IPEX-LLM library, the Arc A770 16GB graphics card can process 70 tokens per second (TPS) - 70% more TPS than the GeForce RTX 4060 8GB using CUDA. What does that mean? 
As a rule of thumb, one token is equivalent to 0.75 words. <a href=\"https:\/\/wordsrated.com\/speed-reading-statistics\/\">The average human reading speed is 4 words per second<\/a>, or about 5.3 TPS. The Arc A770 16GB graphics card can generate words far faster than the average person can read them!<\/p>\n\n\n\n<p>Our internal testing shows that the Arc A770 16GB graphics card delivers this capability with competitive or leading performance across a wide range of models compared to the RTX 4060, making Intel Arc graphics cards a great choice for running LLMs locally.<\/p>\n\n\n\n<p>Now, on to the setup instructions so you can get started with LLMs on your Arc A-series GPU.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Setup Instructions<\/h2>\n\n\n\n<p>You can also refer to this page for environment setup: <a href=\"https:\/\/ipex-llm.readthedocs.io\/en\/latest\/doc\/LLM\/Quickstart\/install_windows_gpu.html\">Install IPEX-LLM on Windows with Intel GPU - IPEX-LLM latest documentation<\/a><\/p>\n\n\n\n<p>1. Disable the integrated GPU in Device Manager.<\/p>\n\n\n\n<p>2. Download and install <a href=\"https:\/\/www.anaconda.com\/download\">Anaconda<\/a>.<\/p>\n\n\n\n<p>3. Once installation is complete, open the Start menu, search for Anaconda Prompt, run it as administrator, and create a virtual environment using the following commands. 
Enter each command separately:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>conda create -n llm python=3.10.6\n\nconda activate llm\n\nconda install libuv\n\npip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0 gradio\n\npip install --pre --upgrade ipex-llm[xpu] --extra-index-url https:\/\/pytorch-extension.intel.com\/release-whl\/stable\/xpu\/us\/\n\npip install transformers==4.38.0<\/code><\/pre>\n\n\n\n<p>4. Create a text document named demo.py and save it to C:\\Users\\Your_username\\Documents or a directory of your choice.<\/p>\n\n\n\n<p>5. Open demo.py in your preferred editor and copy in the following sample code:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from transformers import AutoTokenizer\nfrom ipex_llm.transformers import AutoModelForCausalLM\nimport torch\nimport intel_extension_for_pytorch\n\ndevice = \"xpu\" # the device to load the model onto\n\nmodel_id = \"mistralai\/Mistral-7B-Instruct-v0.2\" # huggingface model id\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, torch_dtype=torch.float16)\nmodel = model.to(device)\n\nmessages = [\n    {\"role\": \"user\", \"content\": \"What is your favourite condiment?\"},\n    {\"role\": \"assistant\", \"content\": \"Well, I'm quite partial to a good squeeze of fresh lemon juice. 
It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!\"},\n    {\"role\": \"user\", \"content\": \"Do you have mayonnaise recipes?\"}\n]\n\nencodeds = tokenizer.apply_chat_template(messages, return_tensors=\"pt\")\n\nmodel_inputs = encodeds.to(device)\nmodel.to(device)\n\ngenerated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)\ndecoded = tokenizer.batch_decode(generated_ids)\nprint(decoded[0])<\/code><\/pre>\n\n\n\n<p class=\"has-small-font-size\"><em>Code built from the sample code <a href=\"https:\/\/huggingface.co\/mistralai\/Mistral-7B-Instruct-v0.2\">in this repository<\/a>.<\/em><\/p>\n\n\n\n<p>6. Save demo.py. In Anaconda Prompt, navigate to the directory containing demo.py using the cd command, then run the following command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python demo.py<\/code><\/pre>\n\n\n\n<p>You can now get a good recipe for making mayonnaise!<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" width=\"1024\" height=\"213\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-mayo-recipe-1024x213.png\" alt=\"\" class=\"wp-image-4746\"><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Switching Models<\/h2>\n\n\n\n<p>Using the same environment we set up above, you can experiment with other popular models on Hugging Face such as llama2-7B-chat-hf, llama3-8B-it, phi-2, gemma-7B-i, and stablelm2 by replacing the Hugging Face model id in demo.py above.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>model_id = \"mistralai\/Mistral-7B-Instruct-v0.2\" # huggingface model id\n\nto\n\nmodel_id = \"stabilityai\/stablelm-2-zephyr-1_6b\" # huggingface model id<\/code><\/pre>\n\n\n\n<p>If you run into errors when launching demo.py, follow the steps below to upgrade or downgrade transformers:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Open Anaconda Prompt<\/li>\n\n\n\n<li>conda activate llm<\/li>\n\n\n\n<li>pip install transformers==4.37.0<\/li>\n<\/ol>\n\n\n\n<p><strong>Verified transformers versions:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table class=\"has-fixed-layout\"><thead><tr><th class=\"has-text-align-center\" data-align=\"center\">Model ID<\/th><th class=\"has-text-align-center\" data-align=\"center\">transformers package version<\/th><\/tr><\/thead><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">meta-llama\/Llama-2-7b-chat-hf<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.37.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">meta-llama\/Meta-Llama-3-8B-Instruct<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.37.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">stabilityai\/stablelm-2-zephyr-1_6b<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">mistralai\/Mistral-7B-Instruct-v0.2<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">microsoft\/phi-2<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">google\/gemma-7b-it<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.1<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">THUDM\/chatglm3-6b<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Memory requirements can vary by model and framework. For the Intel Arc A750 8GB running IPEX-LLM, we recommend Llama-2-7B-chat-hf, Mistral-7B-Instruct-v0.2, phi-2, or chatglm3-6B.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Implementing a Chatbot Web UI<\/h2>\n\n\n\n<p>Now let's implement a Gradio chatbot web UI for a better experience in your web browser. For more information on implementing an interactive chatbot with LLMs, visit <a href=\"https:\/\/www.gradio.app\/guides\/creating-a-chatbot-fast\">https:\/\/www.gradio.app\/guides\/creating-a-chatbot-fast<\/a><\/p>\n\n\n\n<p>1. Create a document named chatbot_gradio.py in the text editor of your choice.<\/p>\n\n\n\n<p>2. Copy and paste the following code snippet into chatbot_gradio.py:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import gradio as gr\nimport torch\nimport intel_extension_for_pytorch\nfrom ipex_llm.transformers import AutoModelForCausalLM\nfrom transformers import AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer\nfrom threading import Thread\n\nmodel_id = \"mistralai\/Mistral-7B-Instruct-v0.2\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, optimize_model=True, load_in_4bit=True, torch_dtype=torch.float16)\nmodel = model.half()\nmodel = model.to(\"xpu\")\n\nclass StopOnTokens(StoppingCriteria):\n    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -&gt; bool:\n        stop_ids = [29, 0]\n        for stop_id in stop_ids:\n            if input_ids[0][-1] == stop_id:\n                return True\n        return False\n\ndef predict(message, history):\n    stop = StopOnTokens()\n    history_format = []\n    for human, assistant in history:\n        history_format.append({\"role\": \"user\", \"content\": human})\n        history_format.append({\"role\": \"assistant\", \"content\": assistant})\n    history_format.append({\"role\": \"user\", \"content\": message})\n\n    prompt = tokenizer.apply_chat_template(history_format, tokenize=False, add_generation_prompt=True)\n    model_inputs = tokenizer(prompt, return_tensors=\"pt\").to(\"xpu\")\n    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)\n    generate_kwargs = dict(\n        model_inputs,\n        streamer=streamer,\n        max_new_tokens=300,\n        do_sample=True,\n        top_p=0.95,\n        top_k=20,\n        temperature=0.8,\n        num_beams=1,\n        pad_token_id=tokenizer.eos_token_id,\n        stopping_criteria=StoppingCriteriaList([stop])\n        )\n    t = Thread(target=model.generate, kwargs=generate_kwargs)\n    t.start()\n\n    partial_message = \"\"\n    for new_token in streamer:\n        if new_token != '&lt;':\n            partial_message += new_token\n            yield partial_message\n\ngr.ChatInterface(predict).launch()<\/code><\/pre>\n\n\n\n<p>3. Open a new Anaconda Prompt and enter the following commands:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>conda activate llm<\/li>\n\n\n\n<li>pip install gradio<\/li>\n\n\n\n<li>cd to the directory where chatbot_gradio.py is located<\/li>\n\n\n\n<li>python chatbot_gradio.py<\/li>\n<\/ul>\n\n\n\n<p>4. Open your web browser and navigate to 127.0.0.1:7860. You should see a chatbot configured with the mistral-7b-instruct-v0.2 language model! You now have a web interface for your chatbot.<\/p>\n\n\n\n<p>5. Ask a question to start a conversation with your chatbot.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1469\" height=\"874\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-chatbot-Q-and-A.png\" alt=\"\" class=\"wp-image-4745\"><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Notices and Disclaimers<\/h3>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<p>Performance varies by use, configuration and other factors. Learn more on <a href=\"https:\/\/edc.intel.com\/content\/www\/us\/en\/products\/performance\/benchmarks\/overview\/\">Intel's Performance Index site<\/a>.<\/p>\n\n\n\n<p>Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.<\/p>\n\n\n\n<p>Results that are based on pre-production systems and components, as well as results that have been estimated or simulated using an Intel reference platform (an internal example of a new system), internal Intel analysis, or architecture simulation or modeling, are provided to you for informational purposes only. 
Results may vary based on future changes to any systems, components, specifications, or configurations.<\/p>\n\n\n\n<p>Your costs and results may vary.<\/p>\n\n\n\n<p>Intel technologies may require enabled hardware, software, or service activation.<\/p>\n\n\n\n<p>\u00a9 Intel Corporation. Intel, the Intel logo, Arc, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.<\/p>\n\n\n\n<p>*Other names and brands may be claimed as the property of others.<\/p>\n<\/div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1280\" height=\"720\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-System-Configuration-and-Workloads.png\" alt=\"\" class=\"wp-image-4739\" style=\"object-fit:cover\"><\/figure>","protected":false},"excerpt":{"rendered":"<p>Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card. 
<\/p>","protected":false},"author":27,"featured_media":4738,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[6],"tags":[45,48,49,14,47],"class_list":["post-4722","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-intel-arc","tag-ai","tag-generative-ai","tag-huggingface","tag-intel-arc-graphics","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/game.intel.com\/fr\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access\" \/>\n<meta property=\"og:description\" content=\"Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. 
Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/game.intel.com\/fr\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\" \/>\n<meta property=\"og:site_name\" content=\"Intel Gaming Access\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-18T22:49:14+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-05-29T21:16:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"768\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Intel Gaming\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@IntelGaming\" \/>\n<meta name=\"twitter:site\" content=\"@IntelGaming\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Intel Gaming\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"},\"author\":{\"name\":\"Intel Gaming\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e\"},\"headline\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs\",\"datePublished\":\"2024-04-18T22:49:14+00:00\",\"dateModified\":\"2024-05-29T21:16:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"},\"wordCount\":1075,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/game.intel.com\/us\/#organization\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"keywords\":[\"AI\",\"Generative AI\",\"Huggingface\",\"intel arc graphics\",\"LLMs\"],\"articleSection\":[\"Intel\u00ae Arc\u2122 Graphics\"],\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\",\"url\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\",\"name\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming 
Access\",\"isPartOf\":{\"@id\":\"https:\/\/game.intel.com\/us\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"datePublished\":\"2024-04-18T22:49:14+00:00\",\"dateModified\":\"2024-05-29T21:16:37+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\",\"url\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"contentUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"width\":1280,\"height\":768},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/game.intel.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/game.intel.com\/us\/#website\",\"url\":\"https:\/\/game.intel.com\/us\/\",\"name\":\"Intel Gaming Access\",\"description\":\"Made to Game. 
Ready for Anything.\",\"publisher\":{\"@id\":\"https:\/\/game.intel.com\/us\/#organization\"},\"alternateName\":\"game.intel.com\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/game.intel.com\/us\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/game.intel.com\/us\/#organization\",\"name\":\"Intel Gaming Access\",\"url\":\"https:\/\/game.intel.com\/us\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png\",\"contentUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png\",\"width\":800,\"height\":800,\"caption\":\"Intel Gaming Access\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/IntelGaming\",\"https:\/\/www.instagram.com\/intelgaming\/\",\"https:\/\/discord.gg\/intel\",\"https:\/\/www.youtube.com\/@intelgaming\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e\",\"name\":\"Intel Gaming\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g\",\"caption\":\"Intel Gaming\"},\"url\":\"https:\/\/game.intel.com\/fr\/stories\/author\/caton-lai-intel\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/game.intel.com\/fr\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","og_locale":"fr_FR","og_type":"article","og_title":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access","og_description":"Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card.","og_url":"https:\/\/game.intel.com\/fr\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","og_site_name":"Intel Gaming Access","article_published_time":"2024-04-18T22:49:14+00:00","article_modified_time":"2024-05-29T21:16:37+00:00","og_image":[{"width":1280,"height":768,"url":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","type":"image\/png"}],"author":"Intel Gaming","twitter_card":"summary_large_image","twitter_creator":"@IntelGaming","twitter_site":"@IntelGaming","twitter_misc":{"Written by":"Intel Gaming","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#article","isPartOf":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"},"author":{"name":"Intel Gaming","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e"},"headline":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs","datePublished":"2024-04-18T22:49:14+00:00","dateModified":"2024-05-29T21:16:37+00:00","mainEntityOfPage":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"},"wordCount":1075,"commentCount":0,"publisher":{"@id":"https:\/\/game.intel.com\/us\/#organization"},"image":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"thumbnailUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","keywords":["AI","Generative AI","Huggingface","intel arc graphics","LLMs"],"articleSection":["Intel\u00ae Arc\u2122 Graphics"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","url":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","name":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming 
Access","isPartOf":{"@id":"https:\/\/game.intel.com\/us\/#website"},"primaryImageOfPage":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"image":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"thumbnailUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","datePublished":"2024-04-18T22:49:14+00:00","dateModified":"2024-05-29T21:16:37+00:00","breadcrumb":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage","url":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","contentUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","width":1280,"height":768},{"@type":"BreadcrumbList","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/game.intel.com\/"},{"@type":"ListItem","position":2,"name":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs"}]},{"@type":"WebSite","@id":"https:\/\/game.intel.com\/us\/#website","url":"https:\/\/game.intel.com\/us\/","name":"Acc\u00e8s Intel Gaming","description":"Made to Game. 
Ready for Anything.","publisher":{"@id":"https:\/\/game.intel.com\/us\/#organization"},"alternateName":"game.intel.com","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/game.intel.com\/us\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/game.intel.com\/us\/#organization","name":"Acc\u00e8s Intel Gaming","url":"https:\/\/game.intel.com\/us\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/","url":"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png","contentUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png","width":800,"height":800,"caption":"Intel Gaming Access"},"image":{"@id":"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/IntelGaming","https:\/\/www.instagram.com\/intelgaming\/","https:\/\/discord.gg\/intel","https:\/\/www.youtube.com\/@intelgaming"]},{"@type":"Person","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e","name":"Intel Gaming","image":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g","caption":"Intel 
Gaming"},"url":"https:\/\/game.intel.com\/fr\/stories\/author\/caton-lai-intel\/"}]}},"_links":{"self":[{"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/posts\/4722","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/users\/27"}],"replies":[{"embeddable":true,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/comments?post=4722"}],"version-history":[{"count":0,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/posts\/4722\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/media\/4738"}],"wp:attachment":[{"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/media?parent=4722"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/categories?post=4722"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/game.intel.com\/fr\/wp-json\/wp\/v2\/tags?post=4722"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}