Glossary & Lexicon of Artificial Intelligence (AI)


Artificial intelligence is a field full of technical terms such as prompt, generative AI and deep learning. At first glance these terms may seem complex, but behind these words, which regularly make the headlines, lie cutting-edge concepts and technologies.

A glossary to simplify and understand AI

If terms such as Spatial Computing or Perceptron still seem obscure to you, don’t worry. In this glossary, we aim to demystify and clarify, in a concise and accessible way, the terms most frequently encountered in the world of artificial intelligence.

We will go through the most common AI-related terms and give you the most precise definitions possible.

50 terms from our lexicon of AI:

  • AI Ethics: Issues that AI stakeholders, such as engineers and government officials, need to consider to ensure that the technology is developed and used responsibly.
  • Algorithm: Sequence of rules given to an AI machine to perform a task or solve a problem.
  • Application Programming Interface (API): Set of rules and protocols enabling different software programs to communicate and exchange information. Acts as an intermediary to facilitate interaction between programmes using different technologies.
  • Artificial Intelligence (AI): Intelligence demonstrated by machines to perform tasks that normally require human intelligence, such as learning, problem solving and understanding language.
  • Artificial neuron: A mechanism with multiple inputs and a single output simulating certain characteristics of the biological neuron.
  • Autonomous Agent: An agent that determines its own actions and operating conditions without external intervention.
  • Big Data: Reference to large sets of data that can be studied to reveal patterns and trends to aid business decisions.
  • Compute Unified Device Architecture (CUDA): NVIDIA's parallel computing platform, which lets computers solve complex problems by breaking them into many smaller tasks processed simultaneously on GPUs.
  • Computer Vision: An interdisciplinary field of science and technology focused on computer understanding of images and videos.
  • Chatbot: Software application designed to imitate human conversation via text or voice commands.
  • Cognitive Computing: A computer model based on imitating human thought processes such as pattern recognition and learning.
  • Data Mining: The process of sorting large datasets to identify patterns that can improve models or solve problems.
  • Data Processing: Process of preparing raw data for use in a machine learning model, including data cleaning and transformation.
  • Data Science: Interdisciplinary field of technology using algorithms and processes to collect and analyse large quantities of data in order to discover patterns and information.
  • Deep Learning (DL): A sub-field of machine learning that uses deep neural networks to learn complex patterns from large amounts of data.
  • Embedding: A technique for representing words in numbers so that computers can process language, using algorithms that analyse words in their context.
  • Emergent Behavior: Unpredictable or unintended behaviour displayed by an AI system.
  • Feature Engineering: The process of selecting and creating new features from raw data to improve the performance of machine learning models.
  • Generative Adversarial Network (GAN): Computer programme that creates new data, such as images or music, by training two opposing neural networks.
  • Generative AI: A type of technology that uses AI to create content, including text, video, code and images.
  • Generative Art: Art created using computer programmes or algorithms that generate visual or auditory outputs, often based on random or mathematical rules.
  • Generative Pre-trained Transformer (GPT): Language model developed by OpenAI, used to generate text similar to that produced by a human.
  • Giant Language model Test Room (GLTR): Tool for determining whether a text has been written by an AI, by analysing the use of words.
  • Graphics Processing Unit (GPU): Computer component specialising in complex calculations for displaying images and videos.
  • Guardrails: Restrictions and rules imposed on AI systems to ensure that they manage data appropriately and do not generate unethical content.
  • Hallucination: Incorrect response from an AI system or false information presented as factual in an output.
  • Hyperparameter: A parameter affecting the way an AI model learns, usually set manually outside the model.
  • LangChain: Library for connecting AI models to external information sources to create agents or chatbots.
  • Natural Language: Language that reads or sounds like human language: fluent and natural.
  • Large Language Model (LLM): Machine learning model trained on large amounts of text data, capable of generating natural text.
  • Modeling: The process of producing a model of a virtual object, usually three-dimensional, using a computer representation.
  • Natural Language Processing (NLP): A sub-field of AI that focuses on teaching machines to understand, process and generate human language.
  • Neural networks: Programmes made up of interconnected processing nodes, loosely inspired by the neurons of the human brain, that simulate the way the brain processes information.
  • Neural Radiance Fields (NeRF): Deep learning technique that represents a 3D scene as a neural network, used to synthesise new views of the scene from a set of 2D images.
  • Overfitting: A common problem in machine learning, arising when a model fits its training data too closely and is unable to generalise to new data.
  • Perceptron: Synthetic neuron model in which the signals received are first weighted, then added together and finally converted into a single output signal by an activation function.
  • Personal voice assistant: Virtual assistant equipped with a voice recognition module and a speech synthesis system, allowing it to recognise spoken instructions and act on them using its own synthesised voice.
  • Prompt: An AI prompt is a specific textual command that guides the reactions or generative productions of artificial intelligence.
  • Python: High-level programming language frequently used in AI tools, recognised for its simplicity and flexibility.
  • Reasoning: Method enabling a computerised system to carry out a logical sequence of steps, on the basis of initial propositions and a knowledge base, in order to reach a conclusion.
  • Reinforcement Learning: A type of machine learning in which the model learns by trial and error, receiving rewards or penalties for its actions and adjusting its behaviour accordingly.
  • Spatial Computing: Using technology to add digital information and experiences to the physical world, including augmented reality and virtual reality.
  • Stable Diffusion: Open source AI model for synthesising complex images from text, available for local installation or via several online user interfaces.
  • Supervised Learning: A type of machine learning where the training data is labelled and the model is trained to make predictions based on the relationships between the input data and the corresponding labels.
  • Temporal Coherence: Consistency and continuity of information or patterns over time, important in areas such as computer vision, natural language processing and time series analysis.
  • Turing test: A test in which a human judge communicates blindly with both a human and a computer to determine whether the machine's responses can be distinguished from the human's.
  • Unsupervised Learning: A type of machine learning where the training data is unlabelled and the model finds patterns and relationships in the data on its own.
  • Synthesized voice: Voice or speech produced by a computer programme that closely reproduces the human voice.
  • Virtual assistant: A programme designed to answer questions in natural language or to perform tasks.
  • Webhook: Method enabling a computer programme to send messages or data to another programme via the Internet in real time.
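The Embedding entry above can be made concrete with toy vectors: words are represented as lists of numbers, and similarity in meaning becomes geometric closeness, usually measured with cosine similarity. The three-dimensional vectors below are hand-made for illustration only; real embeddings are learned from context and have hundreds of dimensions.

```python
import math

# Hand-made toy "embeddings" (illustrative values, not learned from data).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Related words end up closer together than unrelated ones.
sim_royal = cosine_similarity(vectors["king"], vectors["queen"])
sim_fruit = cosine_similarity(vectors["king"], vectors["apple"])
```

With these toy values, "king" is far more similar to "queen" than to "apple", which is exactly the property that lets language models process meaning numerically.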
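The Artificial neuron and Perceptron entries describe the same mechanism: incoming signals are weighted, summed and converted into a single output. A minimal sketch in Python, using the classic perceptron learning rule on made-up data (illustrative only, not from any library), shows the idea:

```python
def perceptron(inputs, weights, bias):
    """Weight the inputs, sum them with the bias, apply a step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(data, labels, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge the weights toward each misclassified example."""
    weights = [0.0] * len(data[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(data, labels):
            error = target - perceptron(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four labelled examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(data, labels)
predictions = [perceptron(x, w, b) for x in data]
```

After training, the single neuron reproduces the AND function; deep learning stacks many such units into the layered networks described above.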
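As a small worked instance of the Supervised Learning entry, the sketch below fits a straight line to four labelled examples using closed-form least squares, then predicts the label of an unseen input. The data and numbers are made up purely for illustration:

```python
# Labelled training examples: inputs xs, labels ys (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# Closed-form least squares for a line y = a*x + b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Use the learned relationship to predict the label of an unseen input.
prediction = a * 5.0 + b
```

The model learns the relationship between inputs and labels from the training pairs, then generalises it to new data, which is the essence of supervised learning.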