A comprehensive database of 50+ popular open source AI models with a powerful REST API and beautiful web interface.
- Comprehensive Database: 50+ AI models across language, vision, audio, multimodal, and embedding categories
- Advanced Filtering: Search by category, provider, capabilities, modalities, and more
- Rich Metadata: Technical specs, pricing, use cases, and compatibility information
- REST API: Full-featured API with filtering, pagination, and search
- Vercel AI SDK Integration: Ready-to-use endpoints for AI applications
- Responsive UI: Beautiful web interface that works on all devices
- Zero Setup: Deploy to Railway with one click
- Clone the repository:
  git clone git@github.com:unicodeveloper/oss-aimodels.git
  cd oss-aimodels
- Install dependencies:
  npm install
- Start the development server:
  npm run dev
- Access the application:
  - Web Interface: http://localhost:3000
  - API: http://localhost:3000/api
  - API Health: http://localhost:3000/api/health
# Install Railway CLI
npm i -g @railway/cli
# Initialize your project
railway init
# Deploy
railway up
# Build the image
docker build -t oss-ai-models .
# Run the container
docker run -p 3000:3000 oss-ai-models
https://aimodels.up.railway.app/api
curl "https://aimodels.up.railway.app/api/models"
# Get language models from Google
curl "https://aimodels.up.railway.app/api/models?category=language&provider=Google"
# Search for image generation models
curl "https://aimodels.up.railway.app/api/models?search=image&outputModality=Image"
# Get models with tool calling support
curl "https://aimodels.up.railway.app/api/models?toolCalling=Yes&limit=10"
curl "https://aimodels.up.railway.app/api/models/Alpaca"
curl "https://aimodels.up.railway.app/api/stats"
// Fetch all language models
const response = await fetch('https://aimodels.up.railway.app/api/models?category=language');
const { data } = await response.json();
// Search for specific capabilities
const codingModels = await fetch('https://aimodels.up.railway.app/api/models?search=coding&toolCalling=Yes');
const results = await codingModels.json();
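The API is paginated, but only the `limit` parameter appears in the examples above. The sketch below therefore assumes a `page` query parameter and simply stops when a page comes back empty; it also assumes the `{ data: [...] }` envelope used in the earlier examples.

```typescript
// Sketch only: walk every page of /api/models.
// `page` is an assumed parameter name; `limit` and the `data` envelope are documented above.
async function fetchAllModels(baseUrl = 'https://aimodels.up.railway.app/api') {
  const all: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(`${baseUrl}/models?limit=50&page=${page}`);
    const { data } = await res.json();
    if (!data || data.length === 0) break; // no more results
    all.push(...data);
  }
  return all;
}
```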
import requests
# Get vision models
response = requests.get('https://aimodels.up.railway.app/api/models', params={
'category': 'vision',
'provider': 'Meta',
'limit': 5
})
models = response.json()['data']
// Use with Vercel AI SDK for model discovery
async function findBestModel(requirements: string) {
const response = await fetch('https://aimodels.up.railway.app/api/models', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
query: requirements,
filters: { toolCalling: 'Yes' },
limit: 5
})
});
const { models } = await response.json();
return models[0]; // Return the best match
}
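Example usage, assuming the endpoint returns records shaped like the AIModel interface documented below:

```typescript
// Illustrative call; field names come from the AIModel interface below.
const model = await findBestModel('extract structured data from long PDF contracts');
console.log(model.name, model.provider, model.technicalSpecs.parameters);
```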
├── data/
│   └── models.js      # Models database
├── server.js          # Express.js API server
├── index.html         # Frontend interface
├── styles.css         # Styling
├── script.js          # Frontend JavaScript
├── package.json       # Dependencies
├── API.md             # Full API documentation
└── README.md          # This file
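For orientation, server.js boils down to an Express app that loads data/models.js and filters it per request. Here is a minimal sketch, not the actual implementation; the export name of the models array and the health-check payload are assumptions:

```typescript
// Minimal sketch of the server, not the actual server.js.
// Assumes data/models.js exports the array as `models`.
import express from 'express';
import { models } from './data/models.js';

const app = express();

app.get('/api/models', (req, res) => {
  const { category, provider, search, limit = '50' } = req.query as Record<string, string>;
  let results = models;
  if (category) results = results.filter((m) => m.category === category);
  if (provider) results = results.filter((m) => m.provider === provider);
  if (search) {
    const q = search.toLowerCase();
    results = results.filter((m) => `${m.name} ${m.description}`.toLowerCase().includes(q));
  }
  res.json({ data: results.slice(0, Number(limit)) });
});

app.get('/api/health', (_req, res) => res.json({ status: 'ok' }));

app.listen(Number(process.env.PORT) || 3000);
```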
Each model includes:
interface AIModel {
name: string;
author: string;
provider: string;
category: 'language' | 'vision' | 'audio' | 'multimodal' | 'embedding';
description: string;
plainDescription: string;
useCases: string[];
inputModalities: string[];
outputModalities: string[];
technicalSpecs: {
parameters: string;
memoryRequired: string;
hardwareRequirement: string;
inferenceSpeed: string;
formats: string[];
toolCalling: 'Yes' | 'No' | 'Limited';
reasoning: 'Strong' | 'Good' | 'Basic' | 'None';
inputCost: string;
outputCost: string;
};
license: string;
downloads: string;
stars: string;
date: string;
tags: string[];
githubUrl: string;
  huggingfaceUrl: string;
}
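For contributors adding models, a single entry in data/models.js might look like the following. The values are illustrative only and are not copied from the database:

```typescript
// Illustrative entry; approximate values, not taken from the live database.
const llama2: AIModel = {
  name: 'LLaMA 2',
  author: 'Meta AI',
  provider: 'Meta',
  category: 'language',
  description: 'Family of pretrained and fine-tuned chat models from 7B to 70B parameters.',
  plainDescription: 'A conversational AI you can run and fine-tune yourself.',
  useCases: ['Chatbots', 'Summarization', 'Code assistance'],
  inputModalities: ['Text'],
  outputModalities: ['Text'],
  technicalSpecs: {
    parameters: '7B-70B',
    memoryRequired: '~14 GB (7B, fp16)',
    hardwareRequirement: 'Single consumer GPU for 7B; multi-GPU for 70B',
    inferenceSpeed: 'Fast (7B) to moderate (70B)',
    formats: ['PyTorch', 'GGUF'],
    toolCalling: 'Limited',
    reasoning: 'Good',
    inputCost: 'Free (self-hosted)',
    outputCost: 'Free (self-hosted)',
  },
  license: 'Llama 2 Community License',
  downloads: '1M+',
  stars: '50k+',
  date: '2023-07',
  tags: ['llm', 'chat', 'open-weights'],
  githubUrl: 'https://github.com/meta-llama/llama',
  huggingfaceUrl: 'https://huggingface.co/meta-llama/Llama-2-7b',
};
```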
- LLaMA 2 - Meta's powerful conversational AI
- GPT-J - EleutherAI's open alternative to GPT-3
- T5 - Google's versatile text-to-text transformer
- Falcon - Technology Innovation Institute's efficient LLM
- Stable Diffusion XL - High-quality text-to-image generation
- YOLOv8 - Real-time object detection
- SAM - Meta's Segment Anything Model
- Vision Transformer (ViT) - Revolutionary image classification
- Whisper - OpenAI's speech recognition
- MusicGen - Meta's music generation
- Bark - Suno AI's creative text-to-audio
- CLIP - OpenAI's vision-language model
- LLaVA - Large language and vision assistant
- BLIP-2 - Salesforce's advanced multimodal AI
- Categories: language, vision, audio, multimodal, embedding
- Providers: Google, Meta, OpenAI, Microsoft, Hugging Face, and 20+ more
- Capabilities: Tool calling, reasoning levels, modalities
- Technical: Parameters, memory requirements, inference speed
- Licensing: MIT, Apache-2.0, GPL-3.0, Custom
- Free Text Search: Names, descriptions, tags, use cases
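These dimensions map directly onto query parameters and can be combined freely. For example, using the parameter names from the curl examples above:

```typescript
// Combine several filter dimensions in one request.
const params = new URLSearchParams({
  category: 'language',
  provider: 'Meta',
  toolCalling: 'Yes',
  search: 'chat',
  limit: '5',
});
const res = await fetch(`https://aimodels.up.railway.app/api/models?${params}`);
const { data } = await res.json();
```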
- RESTful Design: Clean, predictable endpoints
- Advanced Filtering: Multiple filter combinations
- Pagination: Efficient data loading
- Search: Full-text search across all fields
- Sorting: Multiple sort options
- Rate Limiting: 1000 requests per 15 minutes
- CORS Enabled: Works with browser applications
- Error Handling: Comprehensive error responses
- Statistics: Database insights and metrics
- Response Time: < 100ms for most queries
- Database Size: 50 models, ~2MB total
- Caching: Intelligent caching for optimal performance
- CDN Ready: Optimized for global distribution
Contributions are welcome! Here's how you can help:
- Add Models: Submit new open source AI models
- Improve Data: Enhance model descriptions and metadata
- Fix Bugs: Report and fix issues
- Documentation: Improve guides and examples
MIT License - feel free to use this in your own projects!
- API Documentation: API.md
- Live Demo: https://aimodels.up.railway.app
- API Health: https://aimodels.up.railway.app/api/health
- Statistics: https://aimodels.up.railway.app/api/stats
- Issues: GitHub Issues
- API Status: Check the /api/health endpoint
- Rate Limits: Monitor response headers
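The rate-limit header names are not documented here; common Express rate-limiting middleware exposes headers like the ones below, so treat the names as an assumption and confirm against a live response:

```typescript
// Illustrative: header names (RateLimit-Limit / RateLimit-Remaining) are assumed, not documented.
const res = await fetch('https://aimodels.up.railway.app/api/health');
console.log(res.status, res.headers.get('RateLimit-Limit'), res.headers.get('RateLimit-Remaining'));
```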
Vibe Coded & Built with ❤️ for the open source AI community