This repository was created to test and explore Google's new "vibe coding" interface integrated into AI Studio. The interface allows developers to rapidly prototype AI-powered applications by describing their ideas in natural language, which Gemini then transforms into functional web applications.
The Body Language Analyst AI is a vision-based application that analyzes body language in photographs using Gemini's multimodal capabilities.
- Image Upload: Upload one or multiple photos (PNG, JPG, WEBP, etc.)
- Contextual Analysis: Optionally provide context about the relationships and scenario in the photo
- Expert-Level Analysis: Receive a detailed body language analysis, including:
- Facial expressions
- Eye contact and gaze direction
- Posture and positioning
- Gestures and hand movements
- Proxemics (spatial relationships)
- Overall emotional state and intentions
- Upload a photo containing people
- Optionally add context about the situation or relationships
- Click "Analyze Body Language"
- Receive a comprehensive expert analysis of the body language displayed in the image
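The first two steps above — uploading a photo and optionally adding context — could be handled client-side along these lines. This is a sketch, not the app's actual code: the helper names and the accepted-type set (covering the formats named in the feature list) are assumptions.

```javascript
// Client-side sketch (hypothetical helpers, not the generated app's code).
// Accepted types mirror the formats listed above (PNG, JPG, WEBP).
const ACCEPTED_TYPES = new Set(["image/png", "image/jpeg", "image/webp"]);

// Reject files the analyzer is not expected to handle.
function isSupportedImage(file) {
  return ACCEPTED_TYPES.has(file.type);
}

// Read a File into the base64 string a Gemini inline image part expects.
// (Browser-only: relies on the DOM FileReader API.)
function fileToBase64(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result is a data URL; strip the "data:<mime>;base64," prefix.
    reader.onload = () => resolve(String(reader.result).split(",")[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}
```

The base64 string produced here is what gets sent to Gemini alongside the optional context text.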
The app was built using Google AI Studio's app builder interface, which streamlines the development process:
The AI Studio app builder homepage showing various AI-powered app templates
The initial concept: Using vision as the modality for proof of concept, with advanced tagging of people and expert body language analysis
The upload interface with context input field
The app analyzing the uploaded image
Detailed body language analysis results showing facial expressions, posture, gestures, and proxemics
This experiment demonstrates:
- The rapid prototyping capabilities of AI Studio's vibe coding interface
- Gemini's multimodal vision capabilities for analyzing visual content
- How natural language descriptions can be transformed into functional applications
- The potential for building specialized AI tools with minimal traditional coding
- Platform: Google AI Studio
- Model: Gemini 2.5 Pro
- Modality: Vision (image analysis)
- Interface: Web-based application generated through natural language description
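Given the stack above, the analysis step boils down to a `generateContent` call against Gemini 2.5 Pro. The sketch below shows one plausible way to assemble that request — the function names and prompt text are hypothetical; the endpoint shape is the public Gemini REST API.

```javascript
// Request-assembly sketch (hypothetical names; endpoint is the public
// Gemini REST generateContent API).
const ANALYST_PROMPT =
  "You are an expert body language analyst. Describe facial expressions, " +
  "gaze, posture, gestures, proxemics, and apparent emotional state.";

// Build the JSON body for a generateContent call: prompt text, optional
// user-supplied context, then the image as an inline base64 part.
function buildAnalysisRequest(imageBase64, mimeType, context) {
  const parts = [{ text: ANALYST_PROMPT }];
  if (context) {
    parts.push({ text: `Context provided by the user: ${context}` });
  }
  parts.push({ inline_data: { mime_type: mimeType, data: imageBase64 } });
  return { contents: [{ parts }] };
}

// Send the request and pull the analysis text out of the first candidate.
async function analyzeBodyLanguage(imageBase64, mimeType, context, apiKey) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    "gemini-2.5-pro:generateContent";
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-goog-api-key": apiKey },
    body: JSON.stringify(buildAnalysisRequest(imageBase64, mimeType, context)),
  });
  if (!res.ok) throw new Error(`Gemini request failed: ${res.status}`);
  const data = await res.json();
  return data.candidates[0].content.parts[0].text;
}
```

Keeping the payload builder separate from the network call makes the prompt assembly easy to inspect and test without an API key.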
To run this application locally:
- Clone the repository
- Install dependencies: `npm install`
- Create a `.env` file based on `.env.example`: `cp .env.example .env`
- Add your Gemini API key to the `.env` file:
  - Get your API key from Google AI Studio
  - Update `GEMINI_API_KEY` in the `.env` file
- Start the development server: `npm run dev`
- Open your browser to `http://localhost:3000`
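For reference, the `.env` file produced in the steps above might look like the fragment below; the placeholder value is illustrative, not a real key.

```shell
# Gemini API key obtained from Google AI Studio
GEMINI_API_KEY=your-api-key-here
```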