FUTURE NETWORKS FOR DATA PROCESSING CENTERS AND OPERATORS

PID2022-136684OB-C22

Funding agency name: Agencia Estatal de Investigación
Funding agency acronym: AEI
Programme: Programa Estatal para Impulsar la Investigación Científico-Técnica y su Transferencia
Subprogramme: Subprograma Estatal de Generación de Conocimiento
Call: Proyectos de I+D+I (Generación de Conocimiento y Retos Investigación)
Call year: 2022
Managing unit: Plan Estatal de Investigación Científica y Técnica y de Innovación 2021-2023
Beneficiary institution: UNIVERSIDAD POLITECNICA DE MADRID
Persistent identifier: http://dx.doi.org/10.13039/501100011033

Publications

19 results found (1 page)

Reliability Evaluation and Fault Tolerant Design for KLL Sketches

Archivo Digital UPM
  • Gao, Zhen
  • Zhu, Jinhua
  • Reviriego Vasallo, Pedro
Quantile estimation is a fundamental task in big data analysis. In order to achieve high-speed estimation under low memory consumption, especially for streaming big data processing, data sketches which provide approximate estimates at low overhead are usually used, and the Karnin-Lang-Liberty (KLL) sketch is one of the most popular options. However, soft errors in the KLL memory may significantly degrade estimation performance. In this paper, the influence of soft errors on the KLL sketch is considered for the first time. First, the reliability of KLL against soft errors is studied through theoretical analysis and fault injection experiments. The evaluation results show that errors in the KLL construction phase may cause a large deviation in the estimated value. Then, two protection schemes are proposed, based on a single parity check (SPC) and on the incremental property (IP) of the KLL memory. Further evaluation shows that the proposed schemes can significantly improve the reliability of KLL, and even eliminate the effect of single event upsets (SEUs) on the highest bit positions. In particular, the SPC scheme, which requires additional memory, provides better protection for middle bit positions than the IP scheme, which does not introduce any memory overhead.
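The single parity check protection mentioned in the abstract can be illustrated with a minimal sketch. The encoding below is a generic even-parity scheme assumed for illustration only; the paper applies the idea specifically to the KLL sketch memory:

```python
def add_parity(word: int) -> int:
    """Append one even-parity bit to a memory word (single parity check)."""
    parity = bin(word).count("1") & 1
    return (word << 1) | parity

def parity_ok(stored: int) -> bool:
    """A single bit flip anywhere in the stored word breaks even parity."""
    return bin(stored).count("1") % 2 == 0
```

For example, `parity_ok(add_parity(0b1011))` holds, while flipping any one bit of the stored word makes the check fail, so the affected sketch entry can be detected and handled.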




Empowering Database Learning through Remote Educational Escape Rooms

Archivo Digital UPM
  • Barra Arias, Enrique
  • López Pernas, Sonsoles
  • Gordillo Méndez, Aldo
  • Pozo Huertas, Alejandro
  • Muñoz Arcentales, Andrés
  • Conde Díaz, Javier
Learning about databases is indispensable for individuals studying software engineering or computer science or those involved in the IT industry. We analyzed a remote educational escape room for teaching about databases in four different higher education courses in two consecutive academic years. We employed three instruments for evaluation: a pre- and posttest to assess the escape room's effectiveness for student learning, a questionnaire to gather students' perceptions, and a Web platform that unobtrusively records students' interactions and performance. We show novel evidence that educational escape rooms conducted remotely can be engaging as well as effective for teaching about databases.




Adaptive Resolution Inference (ARI): Energy Efficient Machine Learning for the Internet of Things

Archivo Digital UPM
  • Wang, Ziheng
  • Reviriego Vasallo, Pedro
  • Niknia, Farzad
  • Conde Díaz, Javier
  • Liu, Shanshan
  • Lombardi, Fabrizio
The implementation of Machine Learning (ML) in Internet of Things (IoT) devices poses significant operational challenges due to limited energy and computation resources. In recent years, significant efforts have been made to implement simplified ML models that can achieve reasonable performance while reducing computation and energy, for example by pruning weights in neural networks, or using reduced precision for the parameters and arithmetic operations. However, this type of approach is limited by the performance of the ML implementation, i.e., by the loss for example in accuracy due to the model simplification.

In this paper, we present Adaptive Resolution Inference (ARI), a novel approach that makes it possible to evaluate new trade-offs between energy dissipation and model performance in ML implementations. The main principle of the proposed approach is to run inferences with reduced precision (quantization) and use the margin over the decision threshold to determine whether the result is reliable or the inference must be rerun with the full model. The rationale is that quantization only introduces small deviations in the inference scores, so that if the scores have a sufficient margin over the decision threshold, it is very unlikely that the full model would produce a different result. Therefore, we can run the quantized model first and run the full model only when the scores do not have a sufficient margin. This enables most inferences to run with the reduced-precision model, with only a small fraction requiring the full model, thus significantly reducing computation and energy while not affecting model performance. The proposed ARI approach is presented, analyzed in detail, and evaluated using different datasets for both floating-point and stochastic computing implementations. The results show that ARI can significantly reduce the energy for inference in different configurations, with savings between 40% and 85%.
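The early-exit principle described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the model callables, threshold, and margin values are placeholder assumptions.

```python
def ari_inference(x, quantized_model, full_model, threshold=0.5, margin=0.1):
    """Adaptive-resolution inference sketch: trust the cheap quantized model
    when its score is far from the decision threshold, otherwise fall back
    to the full-precision model."""
    score = quantized_model(x)
    if abs(score - threshold) >= margin:
        # Quantization only shifts scores slightly, so a comfortable margin
        # means the full model would almost certainly decide the same way.
        return score >= threshold, "quantized"
    # Too close to call: rerun with the full-precision model.
    return full_model(x) >= threshold, "full"
```

Energy savings then come from the fraction of inputs resolved on the "quantized" path alone.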




Understanding the Impact of Artificial Intelligence in Academic Writing: Metadata to the Rescue

Archivo Digital UPM
  • Conde Díaz, Javier
  • Reviriego Vasallo, Pedro
  • Salvachúa Rodríguez, Joaquín
  • Martínez Ruiz, Gonzalo
  • Hernández, José Alberto
  • Lombardi, Fabrizio
This column advocates for including artificial intelligence (AI)-specific metadata on those academic papers that are written with the help of AI in an attempt to analyze the use of such tools for disseminating research.




Concurrent Classifier Error Detection (CCED) in Large Scale Machine Learning Systems

Archivo Digital UPM
  • Reviriego Vasallo, Pedro
  • Wang, Ziheng
  • Alonso González, Álvaro
  • Gao, Zhen
  • Niknia, Farzad
  • Liu, Shanshan
  • Lombardi, Fabrizio
The complexity of Machine Learning (ML) systems increases each year. As these systems are widely utilized, ensuring their reliable operation is becoming a design requirement. Traditional error detection mechanisms introduce circuit or time redundancy that significantly impacts system performance. An alternative is the use of Concurrent Error Detection (CED) schemes that operate in parallel with the system and exploit their properties to detect errors. CED is attractive for large ML systems because it can potentially reduce the cost of error detection.
In this paper, we introduce Concurrent Classifier Error Detection (CCED), a scheme to implement CED in ML systems using a concurrent ML classifier to detect errors. CCED identifies a set of check signals in the main ML system and feeds them to the concurrent ML classifier, which is trained to detect errors.
The proposed CCED scheme has been implemented and evaluated on two widely used large-scale ML models: Contrastive Language-Image Pre-training (CLIP), used for image classification, and Bidirectional Encoder Representations from Transformers (BERT), used for natural language applications. The results show that more than 95% of the errors are detected when using a simple Random Forest classifier that is orders of magnitude simpler than CLIP or BERT.
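The CCED structure can be sketched as follows. Both the choice of check signals and the thresholded stand-in detector are illustrative assumptions: the paper selects check signals specific to CLIP and BERT and trains a Random Forest on them rather than using fixed thresholds.

```python
def check_signals(logits):
    """Derive simple check signals from the main classifier's output:
    the top score and the margin between the two largest scores."""
    ordered = sorted(logits, reverse=True)
    return ordered[0], ordered[0] - ordered[1]

def error_suspected(signals, min_top=0.4, min_margin=0.1):
    """Stand-in for the trained concurrent classifier: flag an error when
    the check signals fall outside the fault-free operating range."""
    top, margin = signals
    return top < min_top or margin < min_margin
```

The concurrent detector sees only a handful of scalars per inference, which is why it can be orders of magnitude cheaper than the main model.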




An autonomous low-cost studio to record production-ready instructional videos

Archivo Digital UPM
  • Barra Arias, Enrique
  • Quemada Vives, Juan
  • López Pernas, Sonsoles
  • Gordillo Méndez, Aldo
  • Alonso González, Álvaro
  • Carril Fuentetaja, Abel
Producing high-quality educational videos usually requires a large budget as it involves the use of expensive recording studios, the presence of a technician during the entire recording session and often post-production tasks. The high costs associated with video production represent a major hindrance for many educational institutions and, thus, many teachers regard high-quality video recording as inaccessible. As a remedy to this situation, this article presents SAGA (Autonomous Advanced Recording Studio in its Spanish acronym), a low-cost autonomous recording set that allows teachers to produce educational content in video format in an agile way and without the need for post-production. The article provides an overview of SAGA, including a description of its hardware and software so that anyone with basic technical knowledge can replicate and operate the system. SAGA has been used to record more than 1,500 videos including the contents of six MOOCs hosted on the MiriadaX platform, as well as four courses at UPM. SAGA has been evaluated in two ways: (1) from the video producers’ perspective, it was evaluated with a questionnaire based on the Technology Acceptance Model, and (2) from the video consumers’ perspective, a questionnaire was conducted among MOOC participants to assess the perceived technical quality of the videos recorded with SAGA. The results show a very positive general opinion of the SAGA system, the recorded videos and the technical features thereof. Thus, SAGA represents a good opportunity for all those educational institutions and teachers interested in producing high-quality educational videos at a low cost.




From Multipliers to Integrators: a Survey of Stochastic Computing Primitives

Archivo Digital UPM
  • Liu, Shanshan
  • Rosselló, Josep
  • Liu, Siting
  • Tang, Xiaochen
  • Font Rosselló, Joan
  • Frasser, Christian F.
  • Qian, Weikang
  • Han, Jie
  • Reviriego Vasallo, Pedro
  • Lombardi, Fabrizio
Stochastic Computing (SC) has the potential to dramatically improve important nanoscale circuit metrics, including area and power dissipation, for implementing complex digital computing systems, such as large neural networks, filters, or decoders, among others. This paper reviews the state-of-the-art design of important SC building blocks covering both arithmetic circuits, including multipliers, adders, and dividers, and finite state machines (FSMs) that are needed for numerical integration, accumulation, and activation functions in neural networks. For arithmetic circuits, we review newly proposed schemes, such as Delta Sigma Modulator-based dividers providing accurate and low latency computation, as well as design considerations by which the degree of correlation/decorrelation can be efficiently handled at the arithmetic circuit level. As for complex sequential circuits, we review classical stochastic FSM schemes as well as new designs using the recently-proposed dynamic SC to reduce the length of a stochastic sequence to obtain computation results. These stochastic circuits are compared to traditional implementations in terms of efficiency and delay for various levels of accuracy to illustrate the ranges of values for which SC provides significant performance benefits.
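The simplest of the SC primitives surveyed here, stochastic multiplication with an AND gate, can be simulated in a few lines. The bitstream length and seed below are arbitrary choices for illustration.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a stochastic bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=100_000, seed=0):
    """AND two independent bitstreams; the density of ones in the result
    approximates the product a*b."""
    rng = random.Random(seed)
    sa = to_stream(a, n, rng)
    sb = to_stream(b, n, rng)
    return sum(x & y for x, y in zip(sa, sb)) / n
```

The accuracy of the estimate grows only with the square root of the stream length, which is precisely the latency/accuracy trade-off that the surveyed designs, such as dynamic SC, aim to mitigate.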




ASIC Design of Nanoscale Artificial Neural Networks for Inference/Training by Floating-Point Arithmetic

Archivo Digital UPM
  • Niknia, Farzad
  • Wang, Ziheng
  • Liu, Shanshan
  • Reviriego Vasallo, Pedro
  • Louri, Ahmed
  • Lombardi, Fabrizio
Inference and on-chip training of Artificial Neural Networks (ANNs) are challenging computational processes for large datasets; hardware implementations are needed to accelerate this computation, while meeting metrics such as operating frequency, power dissipation and accuracy. In this article, a high-performance ASIC-based design is proposed to implement both forward and backward propagations of multi-layer perceptrons (MLPs) at the nanoscale. To attain a higher accuracy, floating-point arithmetic units for a multiply-and-accumulate (MAC) array are employed in the proposed design; moreover, a hybrid implementation scheme is utilized to achieve flexibility (for networks of different sizes) and a low overall hardware overhead. The proposed design is fully pipelined, and its performance is independent of network size, except for the number of cycles and latency. The efficiency of the proposed nanoscale MLP-based design is analyzed for both inference (which takes place over multiple steps) and training (whose complex backward propagation is streamlined by eliminating many redundant calculations). Moreover, the impact of different floating-point precision formats on the final accuracy and hardware metrics under the same design constraints is studied. A comparative evaluation of the proposed MLP design for different datasets and floating-point precision formats is provided. Results show that, compared to current schemes found in the technical literature, the proposed design has the best operating frequency and accuracy while maintaining good latency and energy dissipation.




Speed and Conversational Large Language Models: Not All Is About Tokens per Second

Archivo Digital UPM
  • Conde Díaz, Javier
  • González Saiz, Miguel
  • Reviriego Vasallo, Pedro
  • Gao, Zhen
  • Liu, Shanshan
  • Lombardi, Fabrizio
We study the speed of open-weights large language models (LLMs) running on GPUs and its dependence on the task at hand, presenting a comparative analysis of the speed of the most popular open LLMs.




Designing Metadata for the Use of Artificial Intelligence in Academia

Archivo Digital UPM
  • Conde Díaz, Javier
  • Martínez Ruiz, Gonzalo
  • Reviriego Vasallo, Pedro
  • Salvachúa Rodríguez, Joaquín
  • Hernández Gutiérrez, José Alberto
Academic writing is one of the most important tasks in Academia. The pressure to "publish or perish" drives researchers to use all the tools available to try to improve their papers and their impact. Generative artificial intelligence (AI) that can create content such as text, tables, and images can be used for many tasks in academic writing such as summarizing, translating, paraphrasing, data analysis, and presentation. Therefore, AI is expected to be widely used in academic writing in the near future and have a significant impact. Understanding this impact is far from trivial and needs to be carefully studied. The first step in doing so is to be able to identify papers written with the help of AI tools. This can be done by adding metadata on the use of AI in academic results. In this paper, we embark on an initial endeavor to devise such metadata and delineate the potential advantages that the inclusion of AI-related metadata in academic publications may bring forth.




How Stable is Stable Diffusion under Recursive InPainting (RIP)?

Archivo Digital UPM
  • Conde Díaz, Javier
  • González Saiz, Miguel
  • Martínez Ruiz, Gonzalo
  • Moral, Fernando
  • Merino Gómez, Elena
  • Reviriego Vasallo, Pedro
Generative Artificial Intelligence image models have achieved outstanding performance in text-to-image generation and other tasks, such as inpainting that completes images with missing fragments. The performance of inpainting can be accurately measured by taking an image, removing some fragments, performing the inpainting to restore them, and comparing the results with the original image. Interestingly, inpainting can also be applied recursively, starting from an image, removing some parts, applying inpainting to reconstruct the image, and then starting the inpainting process again on the reconstructed image, and so forth. This process of recursively applying inpainting can lead to an image that is similar or completely different from the original one, depending on the fragments that are removed and the ability of the model to reconstruct them. Intuitively, stability, understood as the capability to recover an image that is similar to the original one even after many recursive inpainting operations, is a desirable feature and can be used as an additional performance metric for inpainting. The concept of stability is also being studied in the context of recursive training of generative AI models with their own data. Recursive inpainting is an inference-only recursive process whose understanding may complement ongoing efforts to study the behavior of generative AI models under training recursion. In this paper, the impact of recursive inpainting is studied for one of the most widely used image models: Stable Diffusion. The results show that recursive inpainting can lead to image collapse, so ending with a nonmeaningful image, and that the outcome depends on several factors such as the type of image, the size of the inpainting masks, and the number of iterations.
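The recursive inpainting procedure described above is straightforward to express as a loop. In this sketch, `inpaint` is a placeholder for a call to an actual model such as Stable Diffusion, and the image is simplified to a flat list of pixel values.

```python
import random

def recursive_inpaint(image, inpaint, mask_fraction=0.25, iterations=10, seed=0):
    """Repeatedly erase a random fraction of the image and ask the model to
    reconstruct it, feeding each result into the next round."""
    rng = random.Random(seed)
    current = list(image)
    for _ in range(iterations):
        # True marks a pixel that is erased and must be inpainted back.
        mask = [rng.random() < mask_fraction for _ in current]
        current = inpaint(current, mask)
    return current
```

Comparing the final output against the original image, e.g. with a perceptual distance, yields the stability measure the abstract proposes.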




Detect and Replace: Efficient Soft Error Protection of FPGA-Based CNN Accelerators

Archivo Digital UPM
  • Gao, Zhen
  • Qi, Yanmao
  • Shi, Jinchang
  • Liu, Qiang
  • Ge, Guangjun
  • Wang, Yu
  • Reviriego Vasallo, Pedro
Convolutional Neural Networks (CNNs) are widely used in computer vision and natural language processing. Field Programmable Gate Arrays (FPGAs) are a popular accelerator for CNNs. However, FPGAs are prone to suffer soft errors, so the reliability of FPGA-based CNNs becomes a key problem when used in safety critical applications. The convolution module based on a Processing Element (PE) array is the most complex part of the accelerator, so it is the key for efficient protection. Coding based schemes have been proposed for efficient protection of the convolution module, where the processing of the PE array is modeled as parallel Matrix-Vector Multiplications (MVMs), and every wrong output would be concurrently detected and corrected. In this paper, we show that temporary errors affecting a small fraction of data in the feature maps will not degrade the CNN performance, so a more efficient protection scheme is proposed based on faulty PE Detection and Replace (DR). The DR scheme is implemented on a CNN accelerator based on Xilinx Zynq 7000 SoC, and fault injection experiments are performed to evaluate the performance of the proposed DR scheme. The results show that it can effectively improve the system reliability when suffering soft errors with a much lower overhead than current coding-based protection schemes.




Tracking Students’ Progress in Educational Escape Rooms Through a Sequence Analysis Inspired Dashboard

Archivo Digital UPM
  • López Pernas, Sonsoles
  • Gordillo Méndez, Aldo
  • Barra Arias, Enrique
  • Saqr, Mohammed
Learning analytics dashboards are the main vehicle for providing educators with a visual representation of data and insights related to teaching and learning. Recent research has found that the data visualizations provided by dashboards are often very basic and do not take advantage of the latest research advances to analyze and depict the learning process. In this article, we present a success story of how we adapted a visualization used for research purposes for its integration into a dashboard for use by teachers in daily practice. Specifically, we describe the process of transforming and integrating a static sequence analysis visualization into an interactive web visualization in a learning analytics dashboard for monitoring students' temporal trajectories in educational escape rooms in real time. We interviewed teachers to find out how they made use of the dashboard and present a qualitative content analysis of their responses.




Modeling the Effect of SEUs on the Configuration Memory of SRAM-FPGA based CNN Accelerators

Archivo Digital UPM
  • Gao, Zhen
  • Feng, Jiaqi
  • Gao, Shihui
  • Liu, Qiang
  • Ge, Guangjun
  • Wang, Yu
  • Reviriego Vasallo, Pedro
Convolutional Neural Networks (CNNs) are widely used in computer vision applications. SRAM based Field Programmable Gate Arrays (SRAM-FPGAs) are popular for the acceleration of CNNs. Since SRAM-FPGAs are prone to soft errors, the reliability evaluation and efficient fault tolerance design become very important for the use of FPGA-based CNNs in safety critical scenarios. Hardware based fault injection is an effective approach for the reliability evaluation, and the results can provide valuable references for the fault tolerance design. However, the complexity of building a fault injection platform poses a big obstacle for researchers working on the fault tolerance design. To remove this obstacle, this paper first performs a complete reliability evaluation for errors on the configuration memory of the FPGA based CNN accelerators, and then studies the impact of errors on the output feature maps of each layer. Based on the statistical analysis, we propose several fault models for the effect of SEUs on the configuration memory of the FPGA based CNN accelerators, and build a software simulator based on the fault models. Experiments show that the evaluation results based on the software simulator are very close to those from the hardware fault injections. Therefore, the proposed fault models and simulator can facilitate the fault tolerance design and reliability evaluation of CNN accelerators.




Playing with words: Comparing the vocabulary and lexical diversity of ChatGPT and humans

Archivo Digital UPM
  • Reviriego Vasallo, Pedro
  • Conde Díaz, Javier
  • Merino Gómez, Elena
  • Martínez Ruiz, Gonzalo
  • Hernández Gutiérrez, José Alberto
The introduction of Artificial Intelligence (AI) generative language models such as GPT (Generative Pre-trained Transformer) and conversational tools such as ChatGPT has triggered a revolution that can transform how text is generated. This has many implications. For example, as AI-generated text becomes a significant fraction of all text, would this affect the language capabilities of readers and also the training of newer AI tools? Would it affect the evolution of languages? Focusing on one specific aspect of language, words: will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical diversity? This has implications for words, as those not included in AI-generated content will tend to become less and less popular and may eventually be lost. In this work, we perform an initial comparison of the vocabulary and lexical diversity of ChatGPT and humans when performing the same tasks. In more detail, we use two datasets containing the answers of ChatGPT and humans to different types of questions, and a third dataset in which ChatGPT paraphrases sentences and questions. The analysis shows that ChatGPT-3.5 tends to use fewer distinct words and lower lexical diversity than humans, while ChatGPT-4 has a lexical diversity similar to that of humans, and in some cases even larger. These results are very preliminary, and additional datasets and ChatGPT configurations have to be evaluated to extract more general conclusions. Therefore, further research is needed to understand how the use of ChatGPT and, more broadly, generative AI tools will affect the vocabulary and lexical diversity in different types of text and languages.
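A minimal version of one such comparison, counting distinct words and the type-token ratio, can be computed as follows. The tokenizer here is a deliberately crude assumption; the paper's measurements are more careful.

```python
import re

def lexical_diversity(text):
    """Return the number of distinct words and the type-token ratio
    (distinct words / total words), a basic lexical diversity measure."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0, 0.0
    types = set(tokens)
    return len(types), len(types) / len(tokens)
```

Note that the type-token ratio shrinks as texts get longer, so comparisons of the kind made in the paper must be done on samples of comparable length.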




Establishing vocabulary tests as a benchmark for evaluating large language models

Archivo Digital UPM
  • Martínez Ruiz, Gonzalo
  • Conde Díaz, Javier
  • Merino Gómez, Elena
  • Bermúdez Margaretto, Beatriz
  • Hernández Gutiérrez, José Alberto
  • Reviriego Vasallo, Pedro
  • Brysbaert, Marc
Vocabulary tests, once a cornerstone of language modeling evaluation, have been largely overlooked in the current landscape of Large Language Models (LLMs) like Llama 2, Mistral, and GPT. While most LLM evaluation benchmarks focus on specific tasks or domain-specific knowledge, they often neglect the fundamental linguistic aspects of language understanding. In this paper, we advocate for the revival of vocabulary tests as a valuable tool for assessing LLM performance. We evaluate seven LLMs using two vocabulary test formats across two languages and uncover surprising gaps in their lexical knowledge. These findings shed light on the intricacies of LLM word representations, their learning mechanisms, and performance variations across models and languages. Moreover, the ability to automatically generate and perform vocabulary tests offers new opportunities to expand the approach and provide a more complete picture of LLMs’ language skills.




Data from the 8 editions of the escape room about databases

e-cienciaDatos, Repositorio de Datos del Consorcio Madroño
  • Barra Arias, Enrique
Project description

This dataset includes the data generated by the participants of the 8 editions of the educational escape room about databases. Students from 4 different courses at the ETSI Telecomunicación take part in the educational escape rooms and use their knowledge of the course to solve the challenges and advance.

Dataset description

The dataset includes learning analytics data from the 8 editions of an educational escape room about databases run in 4 different courses at the ETSI Telecomunicación, Universidad Politécnica de Madrid: logs from the platform on which the escape rooms are run and data from the surveys administered.

File

It has 4 sheets:

- Two with the logs generated by the platform on which the students participate: https://escape.dit.upm.es
- One sheet with the survey, administered in Moodle
- One sheet with the pre- and post-test data, collected in the Moodle space of the courses




Stable Diffusion aprende de Sebastiano Serlio: dibujo de arquitectura con inteligencia artificial, Stable Diffusion Learns from Sebastiano Serlio: Architectural Drawing with Artificial Intelligence

RiuNet. Repositorio Institucional de la Universitat Politècnica de València
  • Merino-Gómez, Elena (ORCID: 0000-0003-4129-4626)
  • Moral Andrés, Fernando (ORCID: 0000-0002-5511-8239)
  • Querol, Blanca (ORCID: 0009-0005-5295-6148)
  • Reviriego Vasallo, Pedro (ORCID: 0000-0003-2273-1341)
The recent development of generative Artificial Intelligence (AI) tools capable of generating images from text sequences is creating opportunities in many disciplines. Architecture is no exception, and text-to-image generators can be used in graphic representation. However, in most cases, they are not capable of generating architectural drawings following the graphic style of specific authors. Very recently, AI tools have begun to offer users the possibility of partial retraining using a small set of images. This opens the possibility of developing custom text-to-image generators with AI. For example, it would be possible to recreate the way a particular author represents architecture. In this article, we explore the potential of these custom generators using the works of Sebastiano Serlio as a case study. The results show that custom AI generators can capture Serlio's style, opening the research field to in-depth studies on the idiosyncratic modes of graphic expression in architecture throughout history.

This work has been made possible in part by the FUN4DATE project (PID2022-136684OB-C22) funded by the Agencia Estatal de Investigación (doi: 10.13039/501100011033).