A Grafana dashboard that visualizes revenue data from Google BigQuery, featuring time series analysis, growth rates, and revenue distribution across market segments.
- Google Cloud Platform account with BigQuery enabled
- Docker and Docker Compose installed
- Python 3.x with the `uv` package manager
- `gcloud` CLI configured
First, authenticate with your Google Cloud account:

```bash
gcloud auth application-default login
```

This creates credentials at `~/.config/gcloud/application_default_credentials.json`.
Run the setup script to generate fake revenue data and create the BigQuery table:

```bash
uv run setup_bq.py
```

This will:

- Create a dataset called `revenue_dashboard` in your project
- Generate 1000 rows of fake revenue data spanning the last 365 days
- Upload the data to the `revenue_growth` table

Note: If needed, update `PROJECT_ID` in `setup_bq.py` to match your GCP project.
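For orientation, the data-generation step can be sketched as below. This is an illustrative sketch, not the actual `setup_bq.py`: the function name, value ranges, and region names are assumptions, and the real script additionally creates the dataset and uploads the rows via the BigQuery client.

```python
import random
from datetime import date, timedelta

SEGMENTS = ["Enterprise", "SMB", "Consumer", "Government", "Education"]
# Region names are assumed; the README only says there are 4 regions
REGIONS = ["North America", "EMEA", "APAC", "LATAM"]

def generate_rows(n=1000, days=365, seed=42):
    """Generate fake rows matching the revenue_growth schema."""
    rng = random.Random(seed)
    today = date.today()
    rows = []
    for _ in range(n):
        rows.append({
            "date": (today - timedelta(days=rng.randrange(days))).isoformat(),
            "market_segment": rng.choice(SEGMENTS),
            "revenue": round(rng.uniform(1_000, 100_000), 2),   # illustrative range
            "growth_rate": round(rng.uniform(-10.0, 30.0), 2),  # illustrative range
            "region": rng.choice(REGIONS),
        })
    return rows
```

The real script would then load these rows into the `revenue_growth` table (e.g., with the `google-cloud-bigquery` client's JSON load API).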
Create a `.gcp` directory and copy your credentials with proper permissions:

```bash
mkdir -p .gcp
cp ~/.config/gcloud/application_default_credentials.json .gcp/credentials.json
chmod 644 .gcp/credentials.json
```

This allows the Grafana Docker container to read your GCP credentials.
Start the Grafana instance using Docker Compose:

```bash
docker compose up -d
```

Wait about 30 seconds for Grafana to fully start and install the BigQuery plugin.
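For reference, the Compose file that wires this together might look roughly like the sketch below. The container name and the `/etc/secrets/gcp-creds.json` mount path are taken from the troubleshooting commands later in this README; the plugin ID and provisioning paths are assumptions based on standard Grafana conventions, not the project's actual `docker-compose.yaml`.

```yaml
services:
  grafana:
    image: grafana/grafana:latest
    container_name: vibe-grafana
    ports:
      - "3000:3000"
    environment:
      # Install the BigQuery datasource plugin on startup
      - GF_INSTALL_PLUGINS=grafana-bigquery-datasource
    volumes:
      # Path must match the credentials path referenced in datasources.yaml
      - ./.gcp/credentials.json:/etc/secrets/gcp-creds.json:ro
      # Standard Grafana provisioning locations (assumed)
      - ./datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
      - ./dashboard_provider.yaml:/etc/grafana/provisioning/dashboards/dashboard_provider.yaml
      - ./dashboard.json:/var/lib/grafana/dashboards/dashboard.json
```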
- Open your browser and go to http://localhost:3000
- Log in with the default credentials:
  - Username: `admin`
  - Password: `admin`
- Navigate to Dashboards → Browse
- Click on Revenue Growth Dashboard
The dashboard should now display data from BigQuery!
- Revenue by Market Segment: Time series chart showing revenue trends over time, grouped by market segment (Enterprise, SMB, Consumer, Government, Education)
- Average Growth Rate by Segment: Bar gauge displaying the average growth rate for each market segment
- Revenue Share by Segment: Pie chart showing the distribution of total revenue across segments
Time range:

- Default: Last 1 year
- Adjustable using the time picker in the top-right corner
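The bar-gauge panel above is conceptually a simple aggregate. As a sketch (the dashboard's actual SQL may differ), using the project ID that appears in the troubleshooting commands:

```sql
SELECT
  market_segment,
  AVG(growth_rate) AS avg_growth_rate
FROM `dw-genai-dev.revenue_dashboard.revenue_growth`
GROUP BY market_segment
ORDER BY avg_growth_rate DESC
```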
```
┌─────────────────┐
│     Grafana     │
│    (Docker)     │
└────────┬────────┘
         │
         ├─ BigQuery Plugin
         │
         ├─ Datasource: BigQuery
         │    └─ Auth: User Credentials
         │
         └─ Dashboard: dashboard.json
              │
              ▼
┌─────────────────────────────┐
│       Google BigQuery       │
│                             │
│ Dataset: revenue_dashboard  │
│ Table: revenue_growth       │
│   - date (DATE)             │
│   - market_segment (STRING) │
│   - revenue (FLOAT)         │
│   - growth_rate (FLOAT)     │
│   - region (STRING)         │
└─────────────────────────────┘
```
- Check BigQuery table: Verify data exists

  ```bash
  bq query --use_legacy_sql=false "SELECT COUNT(*) FROM \`dw-genai-dev.revenue_dashboard.revenue_growth\`"
  ```

- Check Grafana logs:

  ```bash
  docker compose logs grafana --tail=100
  ```

- Verify credentials file: Ensure the credentials file is readable

  ```bash
  docker exec vibe-grafana cat /etc/secrets/gcp-creds.json | head -c 100
  ```
If you see "missing authentication details" errors:

- Ensure the credentials file exists and is readable:

  ```bash
  ls -la .gcp/credentials.json
  ```

- Restart Grafana:

  ```bash
  docker compose restart grafana
  ```
If you see BigQuery query errors in the logs, verify:
- The project ID matches your GCP project
- Your user account has BigQuery Data Viewer and Job User roles
- The dataset and table exist in BigQuery
To stop Grafana:

```bash
docker compose down
```

To completely remove Grafana data and start fresh:

```bash
docker compose down
docker volume ls | grep graf | awk '{print $2}' | xargs -r docker volume rm
```

Project files:

- `setup_bq.py` - Python script to generate and upload fake data to BigQuery
- `dashboard.json` - Grafana dashboard configuration
- `datasources.yaml` - BigQuery datasource configuration
- `dashboard_provider.yaml` - Dashboard provisioning configuration
- `docker-compose.yaml` - Docker Compose configuration for Grafana
- `.gcp/credentials.json` - GCP credentials for authentication (created during setup)
- The dashboard uses Grafana's `$__timeFilter()` macro with `TIMESTAMP(date)` conversion for DATE columns
- Authentication uses user credentials via the mounted credentials file
- The BigQuery datasource is automatically provisioned on Grafana startup
- Default time range is "Last 1 year" to show all generated data
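The `$__timeFilter()` note above translates into queries shaped roughly like the following sketch (the exact SQL lives in `dashboard.json` and may differ):

```sql
SELECT
  TIMESTAMP(date) AS time,  -- DATE wrapped so the macro can compare timestamps
  market_segment,
  SUM(revenue) AS revenue
FROM `dw-genai-dev.revenue_dashboard.revenue_growth`
WHERE $__timeFilter(TIMESTAMP(date))
GROUP BY time, market_segment
ORDER BY time
```

At query time, Grafana expands `$__timeFilter(...)` into a range condition based on the dashboard's time picker.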
This project was created using AI-assisted development. Below is the original prompt that generated this entire dashboard setup, demonstrating how natural language can be transformed into production-ready code.
> Please create a dashboard to show which market segments are driving our revenue
> growth this quarter. Please use a fake data source in BigQuery. The dashboard
> framework is Grafana.
From this single prompt, the AI assistant autonomously created:
1. Data Pipeline (setup_bq.py)
- Generates 1000 rows of realistic revenue data across 5 market segments and 4 regions
- Creates BigQuery dataset and table with proper schema
- Handles authentication and error cases
2. Visualization Dashboard (dashboard.json)
- Three interconnected panels showing different views of the data
- Proper SQL queries with Grafana macros for dynamic time filtering
- Responsive layout with appropriate chart types for each metric
3. Infrastructure as Code
- `docker-compose.yaml` - Containerized Grafana with BigQuery plugin
- `datasources.yaml` - Auto-provisioned BigQuery datasource
- `dashboard_provider.yaml` - Automatic dashboard loading
- Credentials mounting and permission handling
4. Complete Documentation
- Step-by-step setup instructions
- Troubleshooting guide with common issues
- Architecture diagram
- This README with all necessary context
During development, the AI assistant identified and resolved several technical issues:
- Authentication Configuration
  - Problem: Docker containers cannot access the GCE metadata service
  - Solution: Mount user credentials with proper permissions (644)

- Query Compatibility
  - Problem: Grafana's `$__timeFilter()` macro expects TIMESTAMP, not DATE
  - Solution: Wrap DATE columns with the `TIMESTAMP()` function in queries

- Dashboard Format Codes
  - Problem: BigQuery plugin v3.0.2+ uses numeric format codes
  - Solution: Use `0` for table format, `1` for time series format

- Permissions in Docker
  - Problem: Mounted credentials file not readable by the Grafana container user
  - Solution: Copy credentials to a local directory with world-readable permissions
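The numeric format codes show up in each panel's query target inside `dashboard.json`, roughly like the fragment below. Field names follow common Grafana query-target conventions; the exact structure of this project's `dashboard.json` may vary.

```json
{
  "targets": [
    {
      "datasource": { "type": "grafana-bigquery-datasource" },
      "rawSql": "SELECT TIMESTAMP(date) AS time, SUM(revenue) AS revenue FROM revenue_growth GROUP BY time",
      "format": 1
    }
  ]
}
```

Here `"format": 1` requests time series framing; `0` would request table framing.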
Development time:

- With AI assistant: ~1 hour (including troubleshooting)
- Traditional development: estimated 4-6 hours
This project demonstrates how AI-assisted development can:
- Rapidly prototype end-to-end data visualization solutions
- Generate production-ready code with proper error handling
- Create comprehensive documentation alongside code
- Iteratively debug and fix issues through conversation
- Significantly reduce time-to-value for dashboard projects