Patrick Collison, CEO of Stripe, recently drew attention to the results that emerge when Large Language Models (LLMs) are asked to plot subjective concepts. His tweet, "Asking LLMs to plot subjective things now yields quite interesting results. https://t.co/ojdWMnu0Rb," points to a meaningful step in AI's ability to interpret and visualize qualitative data. The observation reflects a broader trend in AI research and application: a move beyond purely objective data analysis toward more nuanced interpretations of human experience and perception.
The ability of LLMs to handle subjective data marks a notable shift in their capabilities. Traditionally, qualitative data analysis, which involves interpreting personal feelings, opinions, and figurative meanings, has been a labor-intensive and human-centric process. Recent advancements, however, show LLMs are becoming adept at processing such information, as evidenced by a workflow that leverages LLMs for tasks like multilingual translation, text vectorization, clustering, and generating representative descriptions for qualitative data. This allows for the quantification and visualization of previously unquantifiable subjective inputs.
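The vectorize-then-cluster step of such a workflow can be sketched in miniature. This is an illustrative stand-in, not the workflow from the source: the example data is invented, bag-of-words counts substitute for LLM embeddings, and a simple cosine-similarity grouping substitutes for a production clustering algorithm; a real pipeline would call an embedding model and then ask an LLM to label each cluster.

```python
from collections import Counter
from math import sqrt

# Toy free-text responses (invented; stand-ins for already-translated survey answers).
responses = [
    "the onboarding felt confusing and slow",
    "the onboarding setup was confusing",
    "support replied quickly and was friendly",
    "support was quick and friendly",
]

def vectorize(text):
    # Word counts stand in for LLM embeddings in this sketch.
    return Counter(text.split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

vectors = [vectorize(r) for r in responses]

# Greedy clustering: attach each response to the first cluster whose
# representative is similar enough, otherwise start a new cluster.
clusters = []  # list of (representative_vector, member_indices)
for i, vec in enumerate(vectors):
    for rep, members in clusters:
        if cosine(vec, rep) > 0.5:
            members.append(i)
            break
    else:
        clusters.append((vec, [i]))

print([members for _, members in clusters])  # → [[0, 1], [2, 3]]
```

The two resulting groups (complaints about onboarding, praise for support) are exactly the kind of clusters an LLM would then summarize with a representative description, turning free-form subjective input into countable, plottable categories.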
Experts in the field are exploring how LLMs, when provided with external libraries and custom functions, can generate code to perform complex data analysis and create plots based on natural language requests. This includes tasks such as quadratic fits and principal component analysis (PCA), which previously posed significant challenges for LLMs. The integration of tools like numeric.js for mathematical operations and Google Charts for visualization enables LLMs to not only analyze but also graphically represent data derived from subjective queries.
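The two analysis steps named above, a quadratic fit and PCA, are short enough to sketch directly. The example below uses NumPy rather than the numeric.js/Google Charts stack described in the source, and the input numbers are invented for illustration (imagine an LLM has scored items on two subjective axes):

```python
import numpy as np

# Hypothetical scores: invented numbers roughly following 2x^2 + x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 4.1, 10.9, 22.2, 36.9])

# Quadratic fit: least squares for coefficients [a, b, c] of ax^2 + bx + c.
coeffs = np.polyfit(x, y, deg=2)

# PCA via SVD: center a small matrix of per-item scores on two
# subjective axes, then project onto the first principal component.
scores = np.array([[7.0, 2.0], [6.5, 2.5], [3.0, 8.0], [2.5, 7.5]])
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[0]  # 1-D coordinates along the main axis of variation

print(np.round(coeffs, 2))
print(np.round(projected, 2))
```

The fitted coefficients and 1-D projections are precisely the quantities a charting library would then plot: a trend curve over the raw points, or items laid out along their dominant axis of subjective variation.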
The progress in LLMs' subjective language understanding is also being systematically surveyed, covering tasks from sentiment analysis and emotion recognition to sarcasm detection and metaphor interpretation. These tasks inherently deal with personal perspectives and implicit meanings. While LLMs show strong capabilities in grasping contextual nuances, challenges remain in areas requiring deep cultural understanding, subtle irony, and highly ambiguous expressions. The ongoing research aims to refine these models to better align with human-like judgments and handle the inherent complexities of subjective human communication.
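Why sarcasm in particular resists simpler methods can be shown with a deliberately naive baseline. The lexicon below is made up for illustration; the point is that literal word-level sentiment, the kind of surface cue pre-LLM systems relied on, scores a sarcastic complaint as neutral, which is the gap contextual LLM understanding is meant to close:

```python
# Toy sentiment lexicon (invented values for illustration only).
LEXICON = {"great": 1, "love": 1, "friendly": 1,
           "terrible": -1, "hate": -1, "broken": -1}

def literal_sentiment(text):
    # Sum word-level polarities, ignoring all context and tone.
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(LEXICON.get(w, 0) for w in words)

print(literal_sentiment("I love this, it is great"))            # → 2 (correctly positive)
print(literal_sentiment("Oh great, the build is broken again"))  # → 0 (sarcasm read as neutral)
```

The second sentence is clearly negative to a human reader, but its literal word scores cancel out; detecting the irony requires the kind of contextual and pragmatic reasoning the surveyed research is trying to evaluate and improve in LLMs.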