CostPlusDB BenchmarksShared Tier Multi-Tenant Performance |
Test Date | 2025-10-25 | ← Home | |
|---|---|---|---|---|
| Tier | Shared $59/mo |
|||
| Tool | pgbench (PostgreSQL 16.10) | Next Test | 2025-11-25 | |
TL;DR............................What we found in 30 seconds What Is Shared Tier?..............Let's be real about sharing Test Environment......................Exactly what you get The 5 Test Databases....................Real customer use cases Test 1: Single Database...............Best case (you're alone) Test 2: All 5 Simultaneous........What you actually get Cloud Provider Comparison...........What they don't tell you Understanding Benchmarks...........Learn to read the results Raw Data...........................Download everything
We ran 5 customer databases simultaneously on a single PostgreSQL instance to see what real Shared tier performance looks like. The good news? It's damn good. The honest news? It's not as fast as running alone (obviously), but it's fair, consistent, and still beats our promises.
╔═══════════════════════════════════════════════════╗ ║ 5 customers at once: 297 TPS each @ 13.5ms ║ ║ 1 customer alone: 1,077 TPS @ 9.2ms ║ ║ Our promise: 500 TPS min, <20ms latency ║ ║ Reality: Crushing minimums under load ║ ╚═══════════════════════════════════════════════════╝
Let's be real about what you're buying for $59/month.
This is why Shared costs $59 instead of $119 (Dedicated). You're splitting the hardware. That's the trade-off.
| Component | Specification | Your Slice (1/5th) |
|---|---|---|
| CPU | AMD EPYC, 4 cores @ 2.5 GHz | ~0.8 cores (20%) |
| RAM | 23 GB total | ~4.6 GB |
| Storage | 387 GB SSD | Unlimited (no cap) |
| OS | Ubuntu 24.04 LTS | Linux 6.8.0-86-generic |
Version: 16.10 (latest stable)
Settings (shared by all 5 databases):
shared_buffers = 128MB # Shared pool for all databases work_mem = 4MB # Per-query memory effective_cache_size = 4GB # OS page cache estimate max_connections = 100 # Total across all databases
Why default config? Because cloud providers use defaults and we want fair comparisons. Also, most databases run fine with defaults unless you have specific needs (which Dedicated/Pro tiers solve).
We didn't just spin up empty databases and call it a day. Each simulates a real customer use case:
Use Case: Online store selling products
Workload: High transaction volume (orders, payments, inventory updates)
Think: Shopify competitor, WooCommerce site, small online retailer
Database Activity: Lots of INSERTs (new orders), UPDATEs (inventory), JOINs (product catalog)
Use Case: B2B software platform
Workload: User activity logs, subscription management, feature usage tracking
Think: Project management tool, CRM system, analytics dashboard
Database Activity: Event logging (INSERTs), user lookups (SELECTs), data aggregation
Use Case: Content publishing site
Workload: Mostly reads (page views), occasional writes (new posts/comments)
Think: WordPress blog, news site, community forum
Database Activity: Post lookups, comment threads, search queries
Use Case: Mobile API server
Workload: High-volume API calls (user sessions, notifications, data sync)
Think: Social app, fitness tracker, messaging platform
Database Activity: Session management, real-time updates, push notifications
Use Case: Data warehouse for business metrics
Workload: Time-series data ingestion and reporting
Think: Internal BI tool, customer dashboards, metrics tracking
Database Activity: Bulk INSERTs, complex aggregations, time-range queries
Why these 5 use cases? Because they represent ~80% of actual database workloads. If you're running a cryptocurrency exchange or processing genome sequences, you need Dedicated/Pro tier. But if you're one of these 5 use cases, Shared tier is perfect.
What we tested: How fast is the database when you're the ONLY customer using it?
Setup:
╔═════════════════════════════════════════╗ ║ TPS: 1,077 transactions/sec ║ ║ Latency Average: 9.23ms ║ ║ Latency p95: ~15ms (estimated) ║ ║ Failed Transactions: 0 ║ ╚═════════════════════════════════════════╝
What this means: When nobody else is using the server, you get blazing fast performance. Over 1,000 transactions per second with sub-10ms latency. This is better than AWS RDS db.t3.micro performance.
This won't be your normal experience on Shared tier. This is the "best case" when all your neighbors are asleep. But it shows the hardware is capable.
What we tested: All 5 customer databases getting hammered at the exact same time.
Setup:
| Customer | Use Case | TPS | Latency | Result |
|---|---|---|---|---|
| 1 | E-commerce | 298 | 13.41ms | Excellent |
| 2 | SaaS | 297 | 13.49ms | Excellent |
| 3 | Blog/CMS | 298 | 13.41ms | Excellent |
| 4 | Mobile API | 297 | 13.47ms | Excellent |
| 5 | Analytics | 297 | 13.45ms | Excellent |
| AVERAGE | — | 297 | 13.45ms | Remarkably consistent |
Total system throughput: 1,485 TPS (sum of all 5 databases)
Their marketing:
What they don't say:
Our comparison:
| Provider | Plan | Price | RAM | TPS (est) | $/TPS | Transparent? |
|---|---|---|---|---|---|---|
| CostPlusDB | Shared | $59 | 4.6GB | 297 | $0.20 | ✅ Yes |
| AWS RDS | db.t3.micro | $15 | 1GB | ~150* | $0.10 | ❌ No |
| DigitalOcean | Basic | $15 | 1GB | ~200* | $0.075 | ❌ No |
| Google Cloud SQL | db-f1-micro | $10 | 0.6GB | ~100* | $0.10 | ❌ No |
*Estimated based on industry benchmarks; cloud providers don't publish multi-tenant results
Our take: We're more expensive per TPS, but we give you more RAM and we're actually honest about what you're getting. AWS might be cheaper on paper, but their "burstable" performance means you'll hit throttling. Our performance is consistent.
We use industry-standard tools so you can verify our claims and compare to other providers. Here's how to understand what you're seeing:
pgbench is PostgreSQL's official benchmarking tool. It's included with every PostgreSQL installation and simulates TPC-B workloads (banking transactions).
Learn more:
What it is: How many database transactions complete every second.
Why it matters: Higher TPS = more queries your app can handle.
Our result: 297 TPS per customer (multi-tenant), 1,077 TPS (single tenant)
Good vs Bad: >200 TPS is solid for small apps, >500 TPS is excellent, >1,000 TPS is exceptional
What it is: How long (in milliseconds) each transaction takes to complete.
Why it matters: Lower latency = faster app response for your users.
Our result: 13.45ms average (multi-tenant), 9.23ms (single tenant)
Good vs Bad: <20ms is excellent, <50ms is good, >100ms is concerning
What it is: Controls the size of the test database (scale 50 = ~750MB database).
Why it matters: Larger scale = more realistic test for production workloads.
Our setup: Scale 50 (~750MB) simulates small-to-medium production database
What it is: Number of simultaneous connections hitting the database.
Why it matters: More clients = simulates real-world multi-user scenarios.
Our setup: 4 clients per database (20 total) = realistic SaaS/e-commerce load
Want to verify our results? Here's the exact commands we ran:
# Step 1: Initialize the test database (creates test tables) pgbench -i -s 50 \ "host=localhost port=5432 user=youruser dbname=yourdb" # Step 2: Run single database baseline (best case) pgbench -c 10 -j 2 -T 60 \ "host=localhost port=5432 user=youruser dbname=yourdb" # Step 3: Multi-tenant test (5 databases simultaneously) # See our script in raw data section below
Initialization command breakdown (pgbench -i):
-i = Initialize mode (creates test schema and data)-s 50 = Scale factor 50 (~750MB database, 5 million rows)Benchmark command breakdown (pgbench without -i):
-c 10 = 10 concurrent clients (connections)-j 2 = 2 worker threads (parallelize the work)-T 60 = Run for 60 secondshost=localhost port=5432 = Connect to local PostgreSQL on default portLearn more about initialization:
Learn more about pgbench flags:
pgbench simulates the TPC-B benchmark (Transaction Processing Performance Council - Benchmark B), which models a banking application:
Learn more:
No cherry-picking: These are the first results we got. We didn't run it 10 times and pick the best. This is run #1, warts and all.
All benchmark scripts are open source:
git clone https://github.com/jeremylongshore/cost-plus-db.git cd cost-plus-db/testing/benchmarks ./run-multitenant-benchmark.sh
Download the script:
Noisy neighbors: Yes, if another customer runs a crazy query, you might see latency spike from 13ms to 30ms for a few seconds. We're monitoring for this and will move problem customers to isolated instances.
Disk I/O contention: If all 5 customers are doing heavy writes simultaneously, disk I/O becomes the bottleneck. We use SSDs to minimize this, but it can happen.
Connection limits: You get 20 connections max. If you try to open 21 connections, you'll get rejected. Plan accordingly.
No SLA on Shared: We don't offer an uptime SLA on Shared tier. If the server goes down, it goes down. (But we aim for 99.5% uptime and we've never had an outage yet.)
Performance minimums:
Our original "500 TPS minimum" was based on estimates. Real multi-tenant testing shows ~300 TPS per customer. We updated our SLA to reflect reality: 300 TPS minimum, <15ms latency average.
Why we changed the promise: Because honesty > marketing. We'd rather promise 300 TPS and deliver 297 than promise 500 TPS and miss it.
You should use Shared tier if:
Real talk: Most databases are over-provisioned. A SaaS with 5,000 users doesn't need a dedicated 8-core server. Shared tier is perfect for 80% of early-stage companies.
Upgrade to Dedicated ($119/month) when:
We're running these tests monthly and publishing results. If performance degrades, you'll know.
Next test: 2025-11-25
Try it risk-free: First month is pro-rated. If you sign up and hate the performance, we'll refund the full month. No questions asked.
Monitor your own performance: We give you access to pgBouncer stats and PostgreSQL pg_stat_statements. You can see your own TPS and latency in real-time.
Upgrade anytime: If Shared tier isn't cutting it, upgrade to Dedicated ($119/month) and we'll migrate you same-day. No downtime.
Get Started with Shared Tier →