Search within your images using natural language. This application allows you to upload images, automatically analyze them, and later search through them using conversational queries.
Conventional image search has a limitation: you can't find a specific moment you remember among thousands of photos. With this approach, even small details become searchable. Here are a few example queries:
- Hiking on a mountain with cows on the path
- When I was fat
- Visiting a haunted place
- 🖼️ Upload and store images using Cloudinary
- 🔍 Search through images using natural language queries
- 🧠 Vector embeddings for efficient image search
- 🤖 Powered by Llama 3.2 11B model via Groq
- 📊 Vector storage via Upstash
- ⚡ Fast and responsive Next.js frontend
- Frontend: Next.js with TailwindCSS
- Image Storage: Cloudinary
- AI Models:
- OpenAI for vector embeddings
- Llama 3.2 11B via Groq for natural language processing
- Vector Database: Upstash Vector
Deploy directly to Vercel with all required environment variables:
- Create a free account at Cloudinary
- Navigate to the Dashboard
- Copy your Cloud Name, API Key, and API Secret
- Create an unsigned upload preset in Settings > Upload > Upload presets
- Add these values to your .env.local file:
```env
CLOUDINARY_CLOUD_NAME=your_cloud_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret
NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=your_cloud_name
```
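A minimal sketch of how the unsigned upload preset and cloud name might be used from the browser. The helper names (`cloudinaryUploadUrl`, `uploadImage`) and the preset name `my_unsigned_preset` are illustrative placeholders, not part of this project's actual code:

```typescript
// Build the Cloudinary unsigned-upload endpoint for a given cloud name.
export function cloudinaryUploadUrl(cloudName: string): string {
  return `https://api.cloudinary.com/v1_1/${cloudName}/image/upload`;
}

// Upload an image using an unsigned preset (created in Settings > Upload).
// "my_unsigned_preset" is a placeholder; use the preset name you created.
export async function uploadImage(
  file: Blob,
  cloudName: string,
  preset: string = "my_unsigned_preset"
): Promise<string> {
  const form = new FormData();
  form.append("file", file);
  form.append("upload_preset", preset);
  const res = await fetch(cloudinaryUploadUrl(cloudName), {
    method: "POST",
    body: form,
  });
  const data = await res.json();
  return data.secure_url; // URL of the hosted image
}
```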
OpenAI API (for embeddings)
- Sign up at OpenAI
- Create a new secret key
- Add to your .env.local file:
```env
OPENAI_API_KEY=sk-your_openai_api_key
```
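A sketch of how this key could be used to embed text with `text-embedding-3-small` via OpenAI's `/v1/embeddings` REST endpoint. The helper names are hypothetical and error handling is omitted:

```typescript
// Request body for OpenAI's embeddings endpoint.
export function embeddingPayload(input: string) {
  return { model: "text-embedding-3-small", input };
}

// Embed a piece of text; returns the embedding vector.
export async function embed(text: string, apiKey: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(embeddingPayload(text)),
  });
  const data = await res.json();
  return data.data[0].embedding;
}
```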
Groq API (for Llama 3.2 11B model)
- Sign up at Groq
- Generate an API key
- Add to your .env.local file:
```env
GROQ_API_KEY=your_groq_api_key
```
- Sign up at Upstash
- Create a new Vector database
- Get the REST API URL and Token
- Add to your .env.local file:
```env
UPSTASH_URL=your_upstash_url
UPSTASH_TOKEN=your_upstash_token
```
Control whether uploads are enabled:
```env
NEXT_PUBLIC_UPLOAD_DISABLED=false
```
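One way the frontend might read this flag (the `parseFlag` helper is a hypothetical sketch, not the project's actual implementation):

```typescript
// Treat only the literal string "true" (case-insensitive) as disabling uploads;
// anything else — "false", empty, or unset — leaves uploads enabled.
export function parseFlag(value: string | undefined): boolean {
  return (value ?? "").trim().toLowerCase() === "true";
}

// In a component:
//   const uploadsDisabled = parseFlag(process.env.NEXT_PUBLIC_UPLOAD_DISABLED);
```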
- Image Upload: Images are uploaded to Cloudinary for storage.
- Image Analysis: Each image is analyzed by the Llama 3.2 11B Vision model, which generates descriptive text; that text is then converted into vector embeddings via OpenAI.
- Vector Storage: Embeddings are stored in the Upstash Vector Database.
- Natural Language Search: User queries are converted to vectors using text-embedding-3-small.
- Vector Matching: The system finds relevant images based on vector similarity.
- Image Display: Matched images are displayed with their similarity scores.
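The "Vector Matching" step above can be sketched as cosine similarity over embeddings. In the real app Upstash performs this ranking server-side; the helper names here are illustrative, but the math is the same:

```typescript
// Cosine similarity between two equal-length vectors.
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored image embeddings against a query embedding, best first.
export function topMatches(
  query: number[],
  images: { url: string; embedding: number[] }[],
  k = 5
): { url: string; score: number }[] {
  return images
    .map((img) => ({ url: img.url, score: cosineSimilarity(query, img.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```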
- Clone the repository

```bash
git clone https://github.com/yourusername/memory-in-images.git
cd memory-in-images
```

- Install dependencies

```bash
npm install
# or
yarn install
# or
pnpm install
```

- Create a `.env.local` file with all the required environment variables
- Run the development server

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

- Open http://localhost:3000 with your browser to see the result
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues during setup or deployment:
- Email: [email protected]
- Create an issue in this repository
Check out the live demo to see the application in action.
Built with ❤️ by Pushkar Yadav





