
GPT Interaction and Application Platform

中文 | English

Project Overview

This is a Flask-based web application that provides API endpoints and a simple front-end interface for interacting with GPT models. Users can chat with AI, generate images, or get content summaries via a WeChat official account or web interface.

Key Features

  • Text conversation with GPT models: Supports multi-turn conversations for human-like chat experiences.
  • Image generation from user input: Generate high-quality images using AI models.
  • WeChat official account message handling: Supports auto-replies, image generation, and other features.
  • Content summarization: Extract key information from long texts or web pages and generate concise summaries.
  • Custom AI service providers: Supports custom AI providers, including OpenAI-compatible APIs.

One-Click Deployment

Deploy to Vercel
Note: Configure the environment variables to match your actual usage.

Usage

Environment Setup

  1. Clone the repository:

    git clone https://github.com/ryanuo/gpt.git
    cd gpt
  2. Install dependencies:

    pip install -r requirements.txt
  3. Configure environment variables (copy the template, then edit `.env` with your own values):

    cp .env.example .env
  4. Start the service:

    python -m api.index
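To confirm the service is running, you can query the `/vmodels` endpoint. This is a quick check sketched here for convenience, not part of the repository; it assumes Flask's default development port 5000, so adjust `BASE_URL` for your deployment.

```python
import json
import urllib.request

# Assumed base URL: Flask's default development port. Adjust for your setup.
BASE_URL = "http://localhost:5000"

def fetch_models(base_url: str = BASE_URL) -> dict:
    """GET /vmodels and return the grouped model list as a dict."""
    with urllib.request.urlopen(f"{base_url}/vmodels") as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Only hits the network when run directly.
    print(fetch_models())
```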

Environment Variables

| Variable Name | Description | Example Value | Required | Notes |
| --- | --- | --- | --- | --- |
| `WX_TOKEN` | WeChat official account token, used for message signature verification | `wechat_token_123456` | Yes (for WeChat) | Only needed if using WeChat message handling |
| `OPENAI_API_KEY` | API key for GPT models | `sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` | No | Required when using a custom OpenAI-compatible API (`PROVIDER=custom`) |
| `PROVIDER` | AI service provider type | `custom` | No | Default is empty (uses the g4f client default); `custom` means using a custom API |
| `COMPLETION_MODEL` | Default text-generation model | `gpt-4o-mini`, `gpt-3.5-turbo` | No | Defaults to `default_model` in `api/config.py` (`gpt-4o-mini`) |
| `API_KEY` | Authentication key for the custom AI service | `custom_api_key_789` | No | Required if `PROVIDER=custom`; similar to `OPENAI_API_KEY` |
| `API_URL` | API endpoint URL for the custom AI service (OpenAI-compatible) | `https://api.example.com/v1` | No | Required if `PROVIDER=custom`; defaults to the official OpenAI API |

Additional Notes

  • Environment variables should be configured in the project root `.env` file (listed in `.gitignore` to prevent accidental leaks).

  • Example setups:

    • Using default g4f service: no need to set PROVIDER, API_KEY, API_URL; optionally set COMPLETION_MODEL.
    • Using a custom API: set PROVIDER=custom, API_KEY, API_URL; optionally set COMPLETION_MODEL.
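Putting the second setup together, a minimal `.env` for the custom-provider case might look like the following (example values taken from the table above; substitute your own):

```
PROVIDER=custom
API_KEY=custom_api_key_789
API_URL=https://api.example.com/v1
COMPLETION_MODEL=gpt-4o-mini
```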

API Endpoints

| API Type | Path | Method | Parameters | Response |
| --- | --- | --- | --- | --- |
| Conversation | `/g4f/<model>` | POST | `message`: user input text; `context` (optional): conversation history in the format `[{"role":"user","content":"xxx"},{"role":"assistant","content":"xxx"}]`. The model is specified via the `<model>` path segment; available models are listed by `/vmodels`. | JSON containing a `response` field (the model-generated reply) |
| Image Generation | `/generate-image` | POST | `prompt`: description of the image (e.g., "Cyberpunk city night scene"). Uses the `flux` model. | JSON containing an `image_url` field (URL of the generated image) |
| WeChat Official Account | `/wechat` | POST | Automatically parses WeChat server push messages (`signature`, `timestamp`, `nonce`, XML body, etc.). Uses the default model in `api/config.py` (`gpt-4o-mini` by default). | XML reply conforming to WeChat specifications (text, or rich media with image URLs) |
| Content Summarization | `/ai-post` | POST | `url`: URL of the webpage to summarize (e.g., `https://example.com/article`). Uses the default model in `api/config.py` (`gpt-4o-mini` by default). | JSON containing a `summary` field (concise summary, ~50–200 words) |
| Model List | `/vmodels` | GET | None. Returns the supported model list only; no model invocation. | JSON grouped by type, e.g. `{"GPT Series": ["gpt-4o-mini", "gpt-3.5-turbo"], "Other Models": ["claude-3-haiku", "gemini-pro"]}` |
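The conversation endpoint can be exercised with Python's standard library alone. The sketch below builds the request body from the `message` and `context` fields documented above and reads the `response` field from the reply; the base URL and port are assumptions about a local deployment, not values defined by the project.

```python
import json
import urllib.request

def build_chat_payload(message, context=None):
    """Assemble the JSON body for POST /g4f/<model>."""
    payload = {"message": message}
    if context:
        # Prior turns: [{"role": "user"/"assistant", "content": "..."}]
        payload["context"] = context
    return payload

def chat(model, message, context=None, base_url="http://localhost:5000"):
    """POST to /g4f/<model> and return the 'response' field of the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/g4f/{model}",
        data=json.dumps(build_chat_payload(message, context)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Only hits the network when run directly.
    history = [{"role": "user", "content": "Hi"},
               {"role": "assistant", "content": "Hello!"}]
    print(chat("gpt-4o-mini", "Summarize our chat so far.", context=history))
```

Passing the previous turns back in `context` is what enables the multi-turn conversations mentioned under Key Features.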


License

This project is licensed under the MIT License. See LICENSE for details.
