
AI Explorer: Large Language Model Applications


Date: Thursday, 6 April 2023
Time: 10:30 am - 11:45 am EDT

1. How Large Language Models Work: Next Word Prediction using Word Vectors, RNN, LSTM (Dr. Barclay Brown, Assoc. Dir. AI Research, Collins Aerospace)

LLMs like GPT-4 and ChatGPT are popular in the press and getting lots of attention for their surprising abilities. But how do they work? Are they small minds that think like we do? Are they nearly sentient? In this segment we'll look closely at the fundamental concepts used to create LLMs, breaking them down to fundamentals without using code or advanced math.
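To make the core idea of next-word prediction concrete, here is a minimal sketch. Real LLMs learn word vectors and use architectures like RNNs, LSTMs, or transformers; this toy stand-in just counts which word most often follows each word in a tiny corpus. The corpus and function names are illustrative assumptions, not material from the talk.

```python
# Toy illustration of next-word prediction, the task at the heart of LLMs.
# Real models learn word vectors and sequence models (RNN/LSTM/transformer);
# here simple bigram counts stand in for that machinery. The corpus below
# is a hypothetical example chosen only for illustration.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word and "
          "the model learns the model weights").split()

# Count which word follows each word in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))   # "model" follows "the" most often here
```

An LLM does the same thing at vastly greater scale: instead of raw counts, it learns a probability distribution over the next token from billions of examples.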

2. Diagram generation with Large Language Models (Ricardo Reis) 

As LLMs (think ChatGPT and similar tools) are released to the public, experiments are already underway exploring how these tools can accelerate workflows in several areas. This short talk shares the experience of using an LLM for diagram generation, along with musings and questions on the implications of this type of usage.
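One common shape of this workflow is to prompt the LLM for diagram source in a text-based format (such as Mermaid or PlantUML) and then render the returned text with a standard diagram tool. The sketch below is a hypothetical illustration of that prompting step, not the speaker's actual method; the prompt wording and example system are assumptions.

```python
# Hypothetical sketch of LLM-assisted diagram generation: ask the model
# for Mermaid flowchart source, then paste its reply into any Mermaid
# renderer. The prompt text and example system are illustrative only.
def build_diagram_prompt(system_description: str) -> str:
    """Compose a prompt asking an LLM to emit Mermaid flowchart source."""
    return (
        "You are a systems engineer. Produce a Mermaid flowchart for the "
        "following system. Output only the diagram source, nothing else.\n\n"
        f"System description: {system_description}"
    )

prompt = build_diagram_prompt(
    "A sensor sends data to a controller, which commands an actuator."
)
# Send `prompt` to any chat LLM; its reply (Mermaid text) can then be
# rendered to an image by a Mermaid tool.
print(prompt)
```

Because the diagram lives as plain text, it can be version-controlled and iteratively refined by follow-up prompts, which is part of what makes this usage attractive.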

3. LLMs: Need for Prudence (Dr. Ramakrishnan Raman, Fellow, Honeywell Aerospace) 

Abstract: As with any contemporary AI model, Large Language Models have limitations, and users need to be cognizant of them. For instance, users may need to be aware of limitations arising from inherent bias and inaccuracies in the training data, since these have a significant bearing on model responses. This brief session will highlight some typical limitations of large language models and stress the need for discretion while being aware of the associated implications.

Bio: Dr. Ramakrishnan Raman is currently a Fellow at Honeywell Aerospace and Co-Chairs the INCOSE AI Systems Working Group. His areas of interest include complex systems, systems-of-systems, and AI/ML.

4. Applications of LLMs (Tom McDermott) 

More details: