How Large Language Models (LLMs) Are Transforming Computer Vision

Large Language Models (LLMs) are revolutionizing the way we interact with computers and the world around us. Their ability to generate human-like text, summarize information, and answer complex questions has already reshaped industries from customer service to research and development. But what happens when LLMs step beyond text and enter the world of vision?

To truly understand the world, LLM-powered agents need to see. Vision-language models provide one way to achieve this, but interestingly, even text-only LLMs can exhibit multimodal reasoning when paired with prompting and tool use. In his recent talk, Jacob Marks sheds light on how LLMs are transforming computer vision and introduces several groundbreaking projects, including VisProg, ViperGPT, VoxelGPT, and HuggingGPT.

This article explores the core concepts discussed in the talk, covering LLMs, computer vision, multimodal learning, and the future of LLM-powered agents. We'll also dive into Jacob's experiences in building VoxelGPT, his lessons learned, and the exciting potential for domain-specific prompt engineering.

Introduction to Large Language Models (LLMs)

At the heart of LLMs is the idea of using vast datasets to train neural networks capable of understanding and generating language. These models, like GPT-3, GPT-4, and LLaMA, can read, generate, and "understand" text with human-like fluency.

Key Features of LLMs:

  • Self-supervised Pre-training: Trained to predict the next token across large, diverse corpora of internet text.

  • Few-shot and Zero-shot Learning: Able to generalize to unseen tasks from just a few examples in the prompt, or none at all.

  • Contextual Reasoning: Can maintain context over long conversations or documents.

However, while LLMs are great at understanding language, the real world is multimodal. Humans interact with text, images, audio, and even 3D environments. This is where the challenge lies: how do we bridge the gap between language and vision?

Understanding GPT-4

GPT-4 is one of the most advanced LLMs, often used as a general-purpose AI assistant. Compared with earlier models, it handles longer contexts and follows instructions more reliably. Some variants of GPT-4 also support multimodal inputs, meaning they can process images in addition to text.

But while those multimodal variants exist, most LLMs remain text-based. As Jacob Marks shows, even text-only models can play a significant role in computer vision when given the right tools and context through prompting.

What is Computer Vision?

Computer Vision (CV) is a field of artificial intelligence (AI) that enables machines to see, recognize, and interpret images and videos. It's the backbone of facial recognition, object detection, autonomous vehicles, and augmented reality (AR).

Unlike text-based LLMs, computer vision models like ResNet, YOLO, and CLIP are trained on visual data (CLIP, in fact, on paired images and captions). But what if an LLM could "see" without ever being trained on images itself? This is where the distinction between unimodal and multimodal models comes into play.

Unimodal Tasks vs. Multimodal Tasks

  • Unimodal Tasks: Involve just one type of input, like text-only or image-only tasks.

  • Multimodal Tasks: Require the AI to process multiple types of inputs simultaneously, such as combining text and images for image captioning or visual question answering (VQA).

Most LLMs are unimodal, but with smart tool use and prompting, they can engage in multimodal reasoning. Instead of training a model to "see," we can have the LLM delegate visual tasks to pre-trained vision models like YOLO or CLIP. This is the essence of projects like ViperGPT and VoxelGPT.
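
To make the delegation idea concrete, here is a minimal sketch of a "see" tool a text-only LLM could call. It assumes the Hugging Face transformers and Pillow packages; the classify helper and the example labels are ours for illustration, not something from the talk.

```python
# A text-only LLM never touches pixels: it calls this function as a tool and
# reasons over the returned label probabilities, which are plain text to it.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def classify(image_path: str, candidate_labels: list[str]) -> dict[str, float]:
    """Zero-shot image classification with CLIP, exposed as an LLM tool."""
    image = Image.open(image_path)
    inputs = processor(text=candidate_labels, images=image,
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return dict(zip(candidate_labels, probs.tolist()))

# The LLM only ever sees the dictionary below, e.g. {"a photo of a dog": 0.93, ...}
print(classify("photo.jpg", ["a photo of a dog", "a photo of a cat"]))
```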

Bridging the Modality Gap

There’s a significant gap between how LLMs (which handle language) and computer vision models (which handle images) operate. CV models work on pixels, while LLMs work on tokens and semantic meaning. How do we bridge this gap?

  1. Tool Use: LLMs don’t need to "see" directly. Instead, they can call tools like CLIP, YOLO, or ResNet to analyze images, then reason over the results.

  2. Prompt Engineering: By writing well-crafted prompts, we can guide LLMs to request visual data, pick the right tool, ask clarifying questions, and reason step by step (see the routing sketch after this list).

  3. Multimodal Models: Models like GPT-4 (multimodal version) process images and text together, narrowing the modality gap within a single model.
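
As a rough illustration of points 1 and 2, here is how a prompt might let a text-only LLM choose which vision tool to call. The call_llm parameter stands in for any chat-completion API, and the tool names are hypothetical.

```python
import json

# Hypothetical tool catalog the LLM is told about in its prompt
TOOLS = {
    "detect_objects": "YOLO-style detector: returns object labels and boxes",
    "classify_image": "CLIP-style classifier: returns label probabilities",
}

def route(question: str, call_llm) -> dict:
    """Ask the LLM (via the supplied call_llm function) which tool to invoke."""
    prompt = (
        "You cannot see images. You may call exactly one of these tools:\n"
        + "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
        + f"\n\nUser question about the image: {question}\n"
        'Reply with JSON of the form {"tool": ..., "arguments": {...}}'
    )
    return json.loads(call_llm(prompt))

# e.g. route("How many people are in this photo?", call_llm)
# might return {"tool": "detect_objects", "arguments": {"classes": ["person"]}}
```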

Building Bridges with FiftyOne

FiftyOne is a widely used open-source toolkit for building, visualizing, and querying computer vision datasets. Its structured datasets, labels, and query language give text-based systems a well-defined way to interact with visual data.

By incorporating tools like FiftyOne, developers can expose visual datasets to LLMs in a form they can work with. This bridge is central to VoxelGPT, which translates natural-language questions into FiftyOne queries and Python code, letting an LLM explore and curate visual data without ever looking at the pixels.
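
For a sense of what that structured access looks like, here is a small FiftyOne sketch (assuming `pip install fiftyone`); the filter shown is the kind of view an LLM could generate from a plain-English request.

```python
import fiftyone as fo
import fiftyone.zoo as foz
from fiftyone import ViewField as F

# Small sample dataset (200 images with ground truth and model predictions)
dataset = foz.load_zoo_dataset("quickstart")

# The kind of view an LLM might produce for
# "show me high-confidence dog detections"
view = dataset.filter_labels(
    "predictions", (F("label") == "dog") & (F("confidence") > 0.8)
)

session = fo.launch_app(view)  # inspect the filtered samples in the App
```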

Key LLM-Centered Projects Transforming Computer Vision

1. VisProg

VisProg focuses on visual programming with LLMs. Instead of pre-training a new model to recognize images, VisProg has the LLM compose existing vision and image-processing modules into a short program, so the LLM supplies the logical structure while off-the-shelf CV models do the seeing.

2. ViperGPT

ViperGPT takes this concept further toward end-to-end vision-and-language pipelines: a text prompt is turned into an executable program that processes the image. For example:

Prompt: "Count the number of red objects in the image."

ViperGPT creates a program that identifies objects, filters them by color, and counts them — all powered by a combination of LLM logic and CV tools.
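
To make that concrete, here is a toy, self-contained version of the kind of program that gets generated. The ImagePatch class below is a stand-in we wrote for illustration: in ViperGPT itself, find() is backed by an open-vocabulary detector and verify_property() by a vision-language model.

```python
from dataclasses import dataclass

@dataclass
class ImagePatch:
    """Toy stand-in for ViperGPT's image API (the real version wraps CV models)."""
    color: str = "gray"

    def find(self, name: str) -> list["ImagePatch"]:
        # Pretend the detector found three objects, one of them red
        return [ImagePatch(color=c) for c in ("red", "blue", "green")]

    def verify_property(self, name: str, prop: str) -> bool:
        return prop == self.color

# The kind of program an LLM could generate for
# "Count the number of red objects in the image."
def execute_command(image_patch: ImagePatch) -> int:
    objects = image_patch.find("object")
    red = [p for p in objects if p.verify_property("object", "red")]
    return len(red)

print(execute_command(ImagePatch()))  # -> 1 with the toy stand-in above
```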

3. VoxelGPT

Jacob Marks introduces VoxelGPT, the open-source assistant he and the Voxel51 team built on top of FiftyOne. Rather than looking at pixels itself, VoxelGPT translates natural-language requests into FiftyOne queries and Python code, so users can search, filter, and curate their visual datasets conversationally.
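
As an illustration of that translation step (not VoxelGPT's actual output), here is a request and the FiftyOne code it could reasonably be turned into:

```python
import fiftyone.zoo as foz
import fiftyone.brain as fob

dataset = foz.load_zoo_dataset("quickstart")

# Request: "Show me the 10 most unique images in my dataset"
fob.compute_uniqueness(dataset)                        # adds a `uniqueness` field
view = dataset.sort_by("uniqueness", reverse=True).limit(10)

print(view)
```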

4. HuggingGPT

HuggingGPT is a framework that uses an LLM as a controller for models from the Hugging Face ecosystem: the LLM plans the task, selects suitable CV, NLP, or audio models, runs them, and summarizes the results. For example, a user can request:

Prompt: "Generate an image of a futuristic robot using DALL-E and write a 200-word description of it."

HuggingGPT coordinates the request, triggering DALL-E to create the image and GPT to generate the description.
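
Here is a stripped-down sketch of that coordination pattern using two Hugging Face pipelines instead of DALL-E, with the plan hard-coded so the wiring is visible; in HuggingGPT, the LLM itself picks the models and writes the plan. The model names and image path are examples, not the project's configuration.

```python
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")
writer = pipeline("text-generation", model="gpt2")

# Step 1 (vision): a vision model describes the image
caption = captioner("robot.png")[0]["generated_text"]

# Step 2 (language): a language model expands the caption into a description
prompt = f"A short caption of the image: {caption}\nLonger description:"
description = writer(prompt, max_new_tokens=100)[0]["generated_text"]

print(description)
```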

How Agents Can Acquire New Skills

With the right tool use, API access, and multimodal reasoning, LLM-based agents can acquire new skills:

  • See: Access vision models like YOLO, CLIP, or image-captioning models.

  • Hear: Process and transcribe audio using speech-to-text tools.

  • Act: Control external systems and tools via APIs.
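
Here is a minimal sketch of how such skills can be wired up as named tools an agent's planner calls; the tool bodies are placeholders we invented, not any particular agent framework's API.

```python
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function so the planning LLM can invoke it by name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("see")
def describe_image(path: str) -> str:
    return f"[caption for {path} from a vision model]"             # placeholder

@tool("hear")
def transcribe_audio(path: str) -> str:
    return f"[transcript of {path} from a speech-to-text model]"   # placeholder

@tool("act")
def call_api(endpoint: str) -> str:
    return f"[response from {endpoint}]"                           # placeholder

# The LLM's plan is just a sequence of (tool, argument) pairs it decides on
for name, arg in [("see", "scene.jpg"), ("act", "https://example.com/report")]:
    print(TOOLS[name](arg))
```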

Agents like AutoGPT already use LLMs to browse the web, send emails, and analyze files. By adding vision, agents will be able to interact with and understand the world like never before.

The Role of Humans in LLM and Computer Vision

Despite advancements in LLMs and computer vision, humans still play a key role in:

  • Prompt Engineering: Crafting clear, specific, and context-rich prompts.

  • Evaluation: Evaluating and correcting outputs from AI models.

  • Ethical Oversight: Preventing misuse, bias, and prompt injection.

As LLM-powered agents become more autonomous, humans will take on supervisory roles, focusing on guiding AI through high-level instructions rather than low-level control.

The Future of LLMs in Computer Vision

LLMs are no longer just text models; they are becoming multimodal agents. Projects like VoxelGPT and HuggingGPT demonstrate how LLMs can reason about and act on images and other visual data without being vision models themselves. By combining vision and language, these agents are stepping into fields like 3D design, robotics, AR, and CAD.

With future advancements, we may see agents that can:

  • Visualize and design 3D environments in real-time.

  • Generate multimodal creative content (text + images + 3D) simultaneously.

  • Autonomously navigate real-world environments using a combination of vision, reasoning, and action.

The fusion of LLMs and computer vision is a game-changer. From answering questions about 2D images to querying entire visual datasets, the future is bright for AI agents that can see, reason, and act.

Want to see it in action? Keep an eye on groundbreaking projects like VoxelGPT and ViperGPT — they’re shaping the future of AI as we know it.