{"id":4722,"date":"2024-04-18T22:49:14","date_gmt":"2024-04-18T22:49:14","guid":{"rendered":"https:\/\/game.intel.com\/?p=4722"},"modified":"2024-05-29T21:16:37","modified_gmt":"2024-05-29T21:16:37","slug":"wield-the-power-of-llms-on-intel-arc-gpus","status":"publish","type":"post","link":"https:\/\/game.intel.com\/mx\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","title":{"rendered":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs"},"content":{"rendered":"<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Easily run a variety of LLMs locally with Intel\u00ae Arc\u2122 GPUs.<\/h3>\n<\/blockquote>\n\n\n\n<p>Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos, and writing. Large language models (LLMs) have made headlines in the AI era, letting anyone generate song lyrics, get answers to complex physics questions, or draft an outline for a slide presentation. And these AI capabilities no longer need a connection to the cloud or to subscription services: they can run locally on your own PC, where you have full control over the model and can customize its output.<\/p>\n\n\n\n<p>In this article, we\u2019ll show you how to set up and experiment with popular large language models (LLMs) on a PC with an Intel\u00ae Arc\u2122 A770 16GB graphics card. While this tutorial uses the Mistral-7B-Instruct LLM, the same steps work with a PyTorch LLM of your choice, such as Phi2, Llama2, etc. 
And yes, that includes the latest Llama3 model!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">IPEX-LLM<\/h2>\n\n\n\n<p>The reason we can run a variety of models on the same base installation is <a href=\"https:\/\/github.com\/intel-analytics\/ipex-llm\">IPEX-LLM<\/a>, an LLM library for PyTorch. It is built on top of <a href=\"https:\/\/github.com\/intel\/intel-extension-for-pytorch\">Intel\u00ae Extension for PyTorch<\/a> and contains state-of-the-art LLM optimizations and low-bit weight compression (INT4\/FP4\/INT8\/FP8), along with all the latest performance optimizations for Intel hardware. IPEX-LLM takes advantage of XMX AI acceleration in the X<sup>e<\/sup>-cores of Intel discrete GPUs, such as Arc A-series graphics cards, for improved performance. It supports Intel Arc A-series graphics on Windows Subsystem for Linux version 2, native Windows, and native Linux.<\/p>\n\n\n\n<p>And because all of this is native PyTorch, you can easily switch PyTorch models and input data to run on an Intel Arc GPU with high-performance acceleration. This experiment would not have been complete without a performance comparison. 
Using the instructions below for Intel Arc, and the instructions commonly available for the competition, we compared two discrete GPUs positioned in a similar price segment.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img fetchpriority=\"high\" width=\"1280\" height=\"720\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-LLM-Execution-on-Arc-A770-2.png\" alt=\"\" class=\"wp-image-4782\"><\/figure>\n\n\n\n<p>For example, when running the Mistral 7B model with the IPEX-LLM library, the Arc A770 16GB graphics card can process 70 tokens per second (TPS), which is 70% more TPS than the GeForce RTX 4060 8GB using CUDA. What does this mean? 
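One way to answer that is with a short Python sketch. It uses only the figures quoted in this article (70 TPS on the Arc A770 16GB, a rule of thumb of roughly 0.75 words per token, and an average human reading speed of about 4 words per second); the helper names are illustrative, not from any library:

```python
# Figures quoted in this article (assumptions for this sketch):
WORDS_PER_TOKEN = 0.75       # rule of thumb: 1 token ~= 0.75 words
ARC_A770_TPS = 70            # Mistral 7B on Arc A770 16GB with IPEX-LLM
HUMAN_WORDS_PER_SECOND = 4   # average human reading speed

def words_per_second(tokens_per_second: float) -> float:
    """Convert token throughput into an approximate word rate."""
    return tokens_per_second * WORDS_PER_TOKEN

def tokens_per_second_equivalent(words_per_sec: float) -> float:
    """Express a reading speed (words/s) in tokens per second."""
    return words_per_sec / WORDS_PER_TOKEN

arc_wps = words_per_second(ARC_A770_TPS)                          # ~52.5 words/s
human_tps = tokens_per_second_equivalent(HUMAN_WORDS_PER_SECOND)  # ~5.3 tokens/s
print(f"Arc A770: ~{arc_wps:.1f} words/s; human reading: ~{human_tps:.1f} tokens/s")
```

By this arithmetic, generation speed comfortably exceeds reading speed.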
A general rule of thumb is that 1 token equals about 0.75 of a word, and a good point of comparison is the <a href=\"https:\/\/wordsrated.com\/speed-reading-statistics\/\">average human reading speed of 4 words per second<\/a>, or about 5.3 TPS. The Arc A770 16GB graphics card can generate words far faster than the average person can read them.<\/p>\n\n\n\n<p>Our internal testing shows that the Arc A770 16GB graphics card delivers this capability with competitive or leading performance across a wide range of models compared to the RTX 4060, making Intel Arc graphics an excellent choice for running LLMs locally.<\/p>\n\n\n\n<p>Now let\u2019s move on to the setup instructions so you can start using LLMs on your Arc A-series GPU.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Installation Instructions<\/h2>\n\n\n\n<p>You can also refer to this page to set up the environment: <a href=\"https:\/\/ipex-llm.readthedocs.io\/en\/latest\/doc\/LLM\/Quickstart\/install_windows_gpu.html\">Install IPEX-LLM on Windows with Intel GPU - IPEX-LLM latest documentation<\/a><\/p>\n\n\n\n<p>1. Disable the integrated GPU in Device Manager.<\/p>\n\n\n\n<p>2. Download and install <a href=\"https:\/\/www.anaconda.com\/download\">Anaconda<\/a>.<\/p>\n\n\n\n<p>3. Once the installation finishes, open the Start menu, search for Anaconda Prompt, run it as administrator, and create a virtual environment using the following commands. 
Enter each command separately:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>conda create -n llm python=3.10.6\n\nconda activate llm\n\nconda install libuv\n\npip install dpcpp-cpp-rt==2024.0.2 mkl-dpcpp==2024.0.0 onednn==2024.0.0 gradio\n\npip install --pre --upgrade ipex-llm[xpu] --extra-index-url https:\/\/pytorch-extension.intel.com\/release-whl\/stable\/xpu\/us\/\n\npip install transformers==4.38.0<\/code><\/pre>\n\n\n\n<p>4. Create a text document named demo.py and save it to C:\\your_username\\your_documents or to a directory of your choice.<\/p>\n\n\n\n<p>5. Open demo.py in your favorite editor and copy the following code sample into it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from transformers import AutoTokenizer\nfrom ipex_llm.transformers import AutoModelForCausalLM\nimport torch\nimport intel_extension_for_pytorch\n\ndevice = \"xpu\" # the device to load the model onto\n\nmodel_id = \"mistralai\/Mistral-7B-Instruct-v0.2\" # huggingface model id\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, torch_dtype=torch.float16)\nmodel = model.to(device)\n\nmessages = [\n    {\"role\": \"user\", \"content\": \"What is your favorite condiment?\"},\n    {\"role\": \"assistant\", \"content\": \"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavor to whatever I'm cooking up in the kitchen!\"},\n    {\"role\": \"user\", \"content\": \"Do you have any mayonnaise recipes?\"}\n]\n\nencodeds = tokenizer.apply_chat_template(messages, return_tensors=\"pt\")\n\nmodel_inputs = encodeds.to(device)\nmodel.to(device)\n\ngenerated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)\ndecoded = tokenizer.batch_decode(generated_ids)\nprint(decoded[0])<\/code><\/pre>\n\n\n\n<p class=\"has-small-font-size\"><em>Code adapted from the sample code <a href=\"https:\/\/huggingface.co\/mistralai\/Mistral-7B-Instruct-v0.2\">in this repository<\/a>.<\/em><\/p>\n\n\n\n<p>6. Save demo.py. In Anaconda Prompt, navigate to the directory containing demo.py using the cd command, then run the following command:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python demo.py<\/code><\/pre>\n\n\n\n<p>Now you can get a good recipe for making mayonnaise!<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img fetchpriority=\"high\" width=\"1024\" height=\"213\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-mayo-recipe-1024x213.png\" alt=\"\" class=\"wp-image-4746\"><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Switching Models<\/h2>\n\n\n\n<p>Using the same environment we set up above, you can experiment with other popular models on Hugging Face, such as llama2-7B-chat-hf, llama3-8B-it, phi-2, gemma-7B-i, and stablelm2, by replacing the Hugging Face model id in demo.py:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>model_id = \"mistralai\/Mistral-7B-Instruct-v0.2\" # huggingface model id\n\nto\n\nmodel_id = \"stabilityai\/stablelm-2-zephyr-1_6b\" # huggingface model id<\/code><\/pre>\n\n\n\n<p>Different models may require a different version of the transformers package. If you run into errors when launching demo.py, follow these steps to upgrade or downgrade transformers:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Open Anaconda Prompt<\/li>\n\n\n\n<li>conda activate llm<\/li>\n\n\n\n<li>pip install transformers==4.37.0<\/li>\n<\/ol>\n\n\n\n<p><strong>Verified transformers versions:<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table is-style-regular\"><table class=\"has-fixed-layout\"><thead><tr><th class=\"has-text-align-center\" data-align=\"center\">Model ID<\/th><th class=\"has-text-align-center\" data-align=\"center\">transformers package version<\/th><\/tr><\/thead><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\">meta-llama\/Llama-2-7b-chat-hf<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.37.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">meta-llama\/Meta-Llama-3-8B-Instruct<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.37.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">stabilityai\/stablelm-2-zephyr-1_6b<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">mistralai\/Mistral-7B-Instruct-v0.2<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">microsoft\/phi-2<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">google\/gemma-7b-it<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.1<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">THUDM\/chatglm3-6b<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.38.0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Memory requirements can vary by model and framework. 
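As a rough way to reason about those memory needs: 4-bit (INT4) weights take about half a byte per parameter, so weight memory in GB is roughly the parameter count in billions divided by two. The sketch below is an illustrative estimate only, not a figure from this article; the parameter counts are approximate assumptions, and the estimate ignores activations, KV cache, and framework overhead:

```python
# Approximate parameter counts in billions (assumptions, not article figures).
MODELS_B_PARAMS = {
    "microsoft/phi-2": 2.7,
    "THUDM/chatglm3-6b": 6.2,
    "meta-llama/Llama-2-7b-chat-hf": 6.7,
    "mistralai/Mistral-7B-Instruct-v0.2": 7.2,
}

def int4_weight_gb(billions_of_params: float) -> float:
    """INT4 stores ~0.5 bytes per parameter, so GB of weights ~= params(B) / 2."""
    return billions_of_params * 0.5

for model_id, params_b in MODELS_B_PARAMS.items():
    print(f"{model_id}: ~{int4_weight_gb(params_b):.1f} GB of 4-bit weights")
```

By this estimate, 4-bit weights for a 7B-class model occupy only a few GB, which helps explain why such models are a comfortable fit even on an 8 GB card once overhead is added.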
For the Intel Arc A750 8GB running IPEX-LLM, we recommend using Llama-2-7B-chat-hf, Mistral-7B-Instruct-v0.2, phi-2, or chatglm3-6B.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Deploying a Chatbot WebUI<\/h2>\n\n\n\n<p>Now let\u2019s deploy a Gradio chatbot webui for a better experience in your web browser. For more information on deploying an interactive chatbot with LLMs, visit <a href=\"https:\/\/www.gradio.app\/guides\/creating-a-chatbot-fast\">https:\/\/www.gradio.app\/guides\/creating-a-chatbot-fast<\/a><\/p>\n\n\n\n<p>1. Create a document named chatbot_gradio.py in your preferred text editor.<\/p>\n\n\n\n<p>2. Copy and paste the following code snippet into chatbot_gradio.py:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import gradio as gr\nimport torch\nimport intel_extension_for_pytorch\nfrom ipex_llm.transformers import AutoModelForCausalLM\nfrom transformers import AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer\nfrom threading import Thread\n\nmodel_id = \"mistralai\/Mistral-7B-Instruct-v0.2\"\n\ntokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, optimize_model=True, load_in_4bit=True, torch_dtype=torch.float16)\nmodel = model.half()\nmodel = model.to(\"xpu\")\n\nclass StopOnTokens(StoppingCriteria):\n    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -&gt; bool:\n        stop_ids = [29, 0]\n        for stop_id in stop_ids:\n            if input_ids[0][-1] == stop_id:\n                return True\n        return False\n\ndef predict(message, history):\n    stop = StopOnTokens()\n    history_format = []\n    for human, assistant in history:\n        history_format.append({\"role\": \"user\", \"content\": human})\n        history_format.append({\"role\": \"assistant\", \"content\": assistant})\n    history_format.append({\"role\": \"user\", \"content\": message})\n\n    prompt = tokenizer.apply_chat_template(history_format, tokenize=False, add_generation_prompt=True)\n    model_inputs = tokenizer(prompt, return_tensors=\"pt\").to(\"xpu\")\n    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)\n    generate_kwargs = dict(\n        model_inputs,\n        streamer=streamer,\n        max_new_tokens=300,\n        do_sample=True,\n        top_p=0.95,\n        top_k=20,\n        temperature=0.8,\n        num_beams=1,\n        pad_token_id=tokenizer.eos_token_id,\n        stopping_criteria=StoppingCriteriaList([stop])\n        )\n    t = Thread(target=model.generate, kwargs=generate_kwargs)\n    t.start()\n\n    partial_message = \"\"\n    for new_token in streamer:\n        if new_token != '&lt;&#039;:\n            partial_message += new_token\n            yield partial_message\n\ngr.ChatInterface(predict).launch()<\/code><\/pre>\n\n\n\n<p>3. Open a new Anaconda Prompt and enter the following commands:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>conda activate llm<\/li>\n\n\n\n<li>pip install gradio<\/li>\n\n\n\n<li>cd to the directory where chatbot_gradio.py is located<\/li>\n\n\n\n<li>python chatbot_gradio.py<\/li>\n<\/ul>\n\n\n\n<p>4. Open your web browser and navigate to 127.0.0.1:7860. You should see a chatbot set up with the mistral-7b-instruct-v0.2 language model. You now have a sleek webui for your chatbot.<\/p>\n\n\n\n<p>5. 
Ask a question to start a conversation with your chatbot.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1469\" height=\"874\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-chatbot-Q-and-A.png\" alt=\"\" class=\"wp-image-4745\"><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\"\/>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Notices and Disclaimers<\/h3>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<p>Performance varies by use, configuration, and other factors. Learn more at the <a href=\"https:\/\/edc.intel.com\/content\/www\/us\/en\/products\/performance\/benchmarks\/overview\/\">Performance Index site<\/a>.<\/p>\n\n\n\n<p>Performance results are based on testing as of the dates shown in the configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.<\/p>\n\n\n\n<p>Results that are based on pre-production systems and components, as well as results that have been estimated or simulated using an Intel Reference Platform (an internal example new system), internal Intel analysis, or architecture simulation or modeling, are provided to you for informational purposes only. Results may vary based on future changes to any systems, components, specifications, or configurations.<\/p>\n\n\n\n<p>Your costs and results may vary.<\/p>\n\n\n\n<p>Intel technologies may require enabled hardware, software, or service activation.<\/p>\n\n\n\n<p>\u00a9 Intel Corporation. 
Intel, the Intel logo, Arc, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.<\/p>\n\n\n\n<p>*Other names and brands may be claimed as the property of others.<\/p>\n<\/div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" width=\"1280\" height=\"720\" src=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-System-Configuration-and-Workloads.png\" alt=\"\" class=\"wp-image-4739\" style=\"object-fit:cover\"><\/figure>","protected":false},"excerpt":{"rendered":"<p>Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card. <\/p>","protected":false},"author":27,"featured_media":4738,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[6],"tags":[45,48,49,14,47],"class_list":["post-4722","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-intel-arc","tag-ai","tag-generative-ai","tag-huggingface","tag-intel-arc-graphics","tag-llms"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.8 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/game.intel.com\/mx\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\" \/>\n<meta property=\"og:locale\" content=\"es_MX\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Wield The Power of 
LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access\" \/>\n<meta property=\"og:description\" content=\"Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/game.intel.com\/mx\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\" \/>\n<meta property=\"og:site_name\" content=\"Intel Gaming Access\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-18T22:49:14+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-05-29T21:16:37+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1280\" \/>\n\t<meta property=\"og:image:height\" content=\"768\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Intel Gaming\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@IntelGaming\" \/>\n<meta name=\"twitter:site\" content=\"@IntelGaming\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Intel Gaming\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutos\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"},\"author\":{\"name\":\"Intel Gaming\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e\"},\"headline\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs\",\"datePublished\":\"2024-04-18T22:49:14+00:00\",\"dateModified\":\"2024-05-29T21:16:37+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"},\"wordCount\":1075,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/game.intel.com\/us\/#organization\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"keywords\":[\"AI\",\"Generative AI\",\"Huggingface\",\"intel arc graphics\",\"LLMs\"],\"articleSection\":[\"Intel\u00ae Arc\u2122 Graphics\"],\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\",\"url\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\",\"name\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming 
Access\",\"isPartOf\":{\"@id\":\"https:\/\/game.intel.com\/us\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"datePublished\":\"2024-04-18T22:49:14+00:00\",\"dateModified\":\"2024-05-29T21:16:37+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb\"},\"inLanguage\":\"es\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage\",\"url\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"contentUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png\",\"width\":1280,\"height\":768},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/game.intel.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/game.intel.com\/us\/#website\",\"url\":\"https:\/\/game.intel.com\/us\/\",\"name\":\"Intel Gaming Access\",\"description\":\"Made to Game. 
Ready for Anything.\",\"publisher\":{\"@id\":\"https:\/\/game.intel.com\/us\/#organization\"},\"alternateName\":\"game.intel.com\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/game.intel.com\/us\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"es\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/game.intel.com\/us\/#organization\",\"name\":\"Intel Gaming Access\",\"url\":\"https:\/\/game.intel.com\/us\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png\",\"contentUrl\":\"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png\",\"width\":800,\"height\":800,\"caption\":\"Intel Gaming Access\"},\"image\":{\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/IntelGaming\",\"https:\/\/www.instagram.com\/intelgaming\/\",\"https:\/\/discord.gg\/intel\",\"https:\/\/www.youtube.com\/@intelgaming\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e\",\"name\":\"Intel Gaming\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"es\",\"@id\":\"https:\/\/game.intel.com\/us\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g\",\"caption\":\"Intel Gaming\"},\"url\":\"https:\/\/game.intel.com\/mx\/stories\/author\/caton-lai-intel\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/game.intel.com\/mx\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","og_locale":"es_MX","og_type":"article","og_title":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming Access","og_description":"Generative AI has changed the landscape of what\u2019s possible in content creation. This technology has the potential to deliver previously unimagined images, videos and writing. Learn how to set up and experiment with popular large language models (LLMs) from the AI community Huggingface on a PC with the Intel\u00ae Arc\u2122 A770 16GB graphics card.","og_url":"https:\/\/game.intel.com\/mx\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","og_site_name":"Intel Gaming Access","article_published_time":"2024-04-18T22:49:14+00:00","article_modified_time":"2024-05-29T21:16:37+00:00","og_image":[{"width":1280,"height":768,"url":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","type":"image\/png"}],"author":"Intel Gaming","twitter_card":"summary_large_image","twitter_creator":"@IntelGaming","twitter_site":"@IntelGaming","twitter_misc":{"Written by":"Intel Gaming","Est. 
reading time":"8 minutos"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#article","isPartOf":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"},"author":{"name":"Intel Gaming","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e"},"headline":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs","datePublished":"2024-04-18T22:49:14+00:00","dateModified":"2024-05-29T21:16:37+00:00","mainEntityOfPage":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"},"wordCount":1075,"commentCount":0,"publisher":{"@id":"https:\/\/game.intel.com\/us\/#organization"},"image":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"thumbnailUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","keywords":["AI","Generative AI","Huggingface","intel arc graphics","LLMs"],"articleSection":["Intel\u00ae Arc\u2122 Graphics"],"inLanguage":"es","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","url":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/","name":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs | Intel Gaming 
Access","isPartOf":{"@id":"https:\/\/game.intel.com\/us\/#website"},"primaryImageOfPage":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"image":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage"},"thumbnailUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","datePublished":"2024-04-18T22:49:14+00:00","dateModified":"2024-05-29T21:16:37+00:00","breadcrumb":{"@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb"},"inLanguage":"es","potentialAction":[{"@type":"ReadAction","target":["https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/"]}]},{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#primaryimage","url":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","contentUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2024\/04\/LLM-Blog-041824-llmA770-header.png","width":1280,"height":768},{"@type":"BreadcrumbList","@id":"https:\/\/game.intel.com\/stories\/wield-the-power-of-llms-on-intel-arc-gpus\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/game.intel.com\/"},{"@type":"ListItem","position":2,"name":"Wield The Power of LLMs On Intel\u00ae Arc\u2122 GPUs"}]},{"@type":"WebSite","@id":"https:\/\/game.intel.com\/us\/#website","url":"https:\/\/game.intel.com\/us\/","name":"Acceso Intel para juegos","description":"Made to Game. 
Ready for Anything.","publisher":{"@id":"https:\/\/game.intel.com\/us\/#organization"},"alternateName":"game.intel.com","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/game.intel.com\/us\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"es"},{"@type":"Organization","@id":"https:\/\/game.intel.com\/us\/#organization","name":"Acceso Intel para juegos","url":"https:\/\/game.intel.com\/us\/","logo":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/","url":"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png","contentUrl":"https:\/\/game.intel.com\/wp-content\/uploads\/2026\/01\/square-logo.png","width":800,"height":800,"caption":"Intel Gaming Access"},"image":{"@id":"https:\/\/game.intel.com\/us\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/IntelGaming","https:\/\/www.instagram.com\/intelgaming\/","https:\/\/discord.gg\/intel","https:\/\/www.youtube.com\/@intelgaming"]},{"@type":"Person","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/5a9260725321b6f9dc6b73c2048fb49e","name":"Intel Gaming","image":{"@type":"ImageObject","inLanguage":"es","@id":"https:\/\/game.intel.com\/us\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d0fc339c682b4163337309e3b6555e83e4859911e42cdd1109d7b1ddb454cbfb?s=96&d=mm&r=g","caption":"Intel 
Gaming"},"url":"https:\/\/game.intel.com\/mx\/stories\/author\/caton-lai-intel\/"}]}},"_links":{"self":[{"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/posts\/4722","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/users\/27"}],"replies":[{"embeddable":true,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/comments?post=4722"}],"version-history":[{"count":0,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/posts\/4722\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/media\/4738"}],"wp:attachment":[{"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/media?parent=4722"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/categories?post=4722"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/game.intel.com\/mx\/wp-json\/wp\/v2\/tags?post=4722"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}