Flask-SQLAlchemy + SQLAlchemy + Alembic Cheatsheet
Scope
A practical, copy-paste-friendly reference for modern Flask-SQLAlchemy 3.x + SQLAlchemy 2.x + Alembic.
This note focuses on:
- ORM thinking
- how Flask-SQLAlchemy fits on top of SQLAlchemy
- model and relationship behavior
- modern querying with select()
- safe raw SQL and SQL injection avoidance
- schema changes with Alembic / Flask-Migrate
- production configuration and operational concerns
Sources
- SQLAlchemy 2.0 Tutorial: Working with Engines and Connections
- Flask-SQLAlchemy Documentation
- Alembic Tutorial
Table of Contents
- Stack Overview
- What This Stack Is
- Flask-SQLAlchemy Mental Model
- Core Objects
- Database URLs
- Setup and App Context
- Minimal Flask-SQLAlchemy Setup
- Application Context in Flask
- Models and Schema
- Declarative Mapping Basics
- Common SQLAlchemy Column Types
- mapped_column() Reference
- Foreign Keys Explained
- Relationships
- Relationships Refresher
- relationship() Reference
- Relationship Patterns
- Session Lifecycle and CRUD
- Object States in the ORM
- CRUD Quick Reference
- Transactions and Session Rules
- Querying and Loading
- Modern Querying with select()
- Filtering Cheat Sheet
- Result Method Cheat Sheet
- Joins, Aggregates, and Grouping
- Loading Related Data Efficiently
- Pagination with Flask-SQLAlchemy
- Flask-SQLAlchemy Convenience Helpers
- Regex Support
- Raw SQL, SQLite, and SQL Injection
- Executing Raw SQL Safely
- SQLite PRAGMA Explained Simply
- Avoiding SQL Injection
- Migrations
- Alembic Migrations Explained Simply
- Flask-Migrate vs Alembic
- Flask-Migrate / Alembic Commands
- Example Migration Files
- Production and Operations
- Production Best Practices
- Choosing the Right Database Backend
- Production Configuration Checklist
- Important Flask-SQLAlchemy Configuration Keys
- Engine and Pool Tuning
- Session and Transaction Discipline in Production
- SQLite Production Notes
- PostgreSQL Production Notes
- Production Config Example
- Production App Factory Example
- Deployment and Migration Workflow
- Performance and Query Hygiene
- Security Considerations
- Observability and Troubleshooting
- Backups and Recovery
- Background Jobs, Scripts, and CLI Tasks
- Pre-Launch Checklist
- Simple Production Rules of Thumb
- Pitfalls and Reference
- Common Mistakes
- Glossary
- Copy-Paste Starter Skeleton
- One-Line Memory Hooks
Stack Overview
What This Stack Is
SQLAlchemy
SQLAlchemy is the database toolkit and ORM.
It gives you two major styles:
- Core: lower-level SQL construction and execution
- ORM: Python classes mapped to database tables
Flask-SQLAlchemy
Flask-SQLAlchemy is a Flask integration layer on top of SQLAlchemy.
It gives you:
- a Flask-aware db extension object
- an app/request-context-managed session
- convenience helpers like get_or_404() and paginate()
Alembic
Alembic is the schema migration tool used to version and apply database changes over time.
Flask-Migrate
Flask-Migrate is a Flask CLI wrapper around Alembic.
> The common stack is:
- SQLAlchemy for DB access and ORM
- Flask-SQLAlchemy for Flask integration
- Alembic for migrations
- Flask-Migrate for convenient Flask CLI commands
Flask-SQLAlchemy Mental Model
A useful mental model is:
- Flask manages app and request lifecycle
- Flask-SQLAlchemy provides db
- db.session is your request-scoped unit of work
- models inherit from db.Model
- schema changes are tracked with Alembic migrations
In most route handlers you will:
- receive request input
- query with db.session
- modify ORM objects
- commit() once
- return a response
Core Objects
| Object | What it is | Think of it as |
|---|---|---|
| Engine | Talks to the DB driver and creates connections | how Python reaches the DB |
| Connection | A live DBAPI-level connection | a live wire to the database |
| Session | Tracks ORM objects, changes, flushes, and transactions | the ORM workspace |
| Model | A mapped Python class representing a table | a Python class for one table |
Engine
from sqlalchemy import create_engine
engine = create_engine("sqlite:///app.db")
Connection
from sqlalchemy import text
with engine.connect() as conn:
result = conn.execute(text("SELECT 1"))
Session
user = User(username="alice")
db.session.add(user)
db.session.commit()
Model
class User(db.Model):
__tablename__ = "users"
Database URLs
Common connection strings:
sqlite:///app.db
sqlite:////absolute/path/app.db
sqlite:///:memory:
postgresql+psycopg://user:password@localhost:5432/mydb
mysql+pymysql://user:password@localhost:3306/mydb
Notes:
- sqlite:///app.db -> relative SQLite file
- sqlite:////tmp/app.db -> absolute SQLite path
- sqlite:///:memory: -> in-memory DB
- postgresql+psycopg://... -> PostgreSQL via psycopg
Setup and App Context
Minimal Flask-SQLAlchemy Setup
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
db = SQLAlchemy(model_class=Base)
def create_app() -> Flask:
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
"pool_pre_ping": True,
}
db.init_app(app)
return app
Why this shape:
- DeclarativeBase is the SQLAlchemy 2.x typed base style
- model_class=Base tells Flask-SQLAlchemy to use your custom base
- app factory style keeps setup testable and avoids circular imports
Application Context in Flask
Flask-SQLAlchemy needs an application context for many operations outside a request.
app = create_app()
with app.app_context():
db.create_all()
Use app context when:
- running scripts
- using Flask shell
- seeding the database
- calling db.create_all() in tiny experiments
- querying outside request handlers
Models and Schema
Declarative Mapping Basics
Full example
from datetime import datetime, timezone
from sqlalchemy import String, Text, ForeignKey
from sqlalchemy.orm import Mapped, mapped_column, relationship
class User(db.Model):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
username: Mapped[str] = mapped_column(String(64), unique=True, index=True, nullable=False)
email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
is_admin: Mapped[bool] = mapped_column(default=False, nullable=False)
created_at: Mapped[datetime] = mapped_column(
default=lambda: datetime.now(timezone.utc),
nullable=False,
)
posts: Mapped[list["Post"]] = relationship(
back_populates="author",
cascade="all, delete-orphan",
)
class Post(db.Model):
__tablename__ = "posts"
id: Mapped[int] = mapped_column(primary_key=True)
title: Mapped[str] = mapped_column(String(200), nullable=False)
body: Mapped[str] = mapped_column(Text, nullable=False)
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), nullable=False, index=True)
author: Mapped["User"] = relationship(back_populates="posts")
db.Model
The base class your models inherit from.
Mapped[...]
A typed SQLAlchemy annotation meaning “this attribute is ORM-mapped and behaves like this Python type”.
Examples:
id: Mapped[int]
name: Mapped[str]
team_id: Mapped[int | None]
posts: Mapped[list["Post"]]
mapped_column(...)
Defines how the attribute is stored in the database.
username: Mapped[str] = mapped_column(String(64), unique=True, nullable=False)
Common SQLAlchemy Column Types
from sqlalchemy import Integer, String, Text, Boolean, DateTime, JSON, LargeBinary
| Type | Example | Use |
|---|---|---|
| Integer | mapped_column(Integer) | whole numbers |
| String(64) | mapped_column(String(64)) | short text |
| Text | mapped_column(Text) | long text |
| Boolean | mapped_column(Boolean) | true/false |
| DateTime(timezone=True) | mapped_column(DateTime(timezone=True)) | timestamps |
| JSON | mapped_column(JSON) | structured JSON |
| LargeBinary | mapped_column(LargeBinary) | bytes / blobs |
Example:
from sqlalchemy import String, Boolean, Text
title: Mapped[str] = mapped_column(String(128), nullable=False)
description: Mapped[str] = mapped_column(Text, nullable=False)
is_hidden: Mapped[bool] = mapped_column(Boolean, default=False, nullable=False)
mapped_column() Reference
| Keyword | Meaning | Example | Notes |
|---|---|---|---|
| primary_key=True | Marks primary key | mapped_column(primary_key=True) | usually one per table |
| nullable=False | Value is required | mapped_column(nullable=False) | enforced by DB |
| unique=True | No duplicates allowed | mapped_column(unique=True) | often also indexed |
| index=True | Adds index | mapped_column(index=True) | speeds common lookups |
| default=... | Python-side default | mapped_column(default=True) | applied by SQLAlchemy on insert |
| server_default=... | DB-side default | mapped_column(server_default=text("'new'")) | applied by the DB |
| ForeignKey(...) | References another table | mapped_column(ForeignKey("users.id")) | creates DB-level linkage |
| onupdate=... | Python-side update hook | mapped_column(onupdate=...) | good for updated_at |
| comment=... | DB column comment | mapped_column(comment="...") | backend-dependent |
| autoincrement=True | Auto-increment integer PK | mapped_column(primary_key=True, autoincrement=True) | often implicit |
| custom column name | Python name differs from DB name | mapped_column("created_at", String(30)) | use sparingly |
Examples:
id: Mapped[int] = mapped_column(primary_key=True)
email: Mapped[str] = mapped_column(unique=True, nullable=False)
score: Mapped[int] = mapped_column(index=True)
from sqlalchemy import text
created_label: Mapped[str] = mapped_column(server_default=text("'new'"))
from datetime import datetime, timezone
updated_at: Mapped[datetime] = mapped_column(
default=lambda: datetime.now(timezone.utc),
onupdate=lambda: datetime.now(timezone.utc),
)
Foreign Keys Explained
A foreign key says: “this column references a row in another table.”
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), nullable=False)
This means each posts.user_id must point to a valid users.id.
Foreign keys help with:
- referential integrity
- joins
- relationship mapping
- preventing invalid references
ondelete="CASCADE"
user_id: Mapped[int] = mapped_column(
ForeignKey("users.id", ondelete="CASCADE"),
nullable=False,
)
Common options:
- CASCADE -> delete child rows too
- SET NULL -> set FK to NULL
- RESTRICT or default behavior -> refuse delete if child rows exist
> ORM cascade= and DB ondelete= are related but not identical:
- cascade= is ORM behavior
- ondelete= is database behavior
Relationships
Relationships Refresher
A relationship is the Python-side way to navigate rows linked by foreign keys.
Without relationship:
post.user_id
With relationship:
post.author.username
> relationship() does not create the foreign key.
You still need mapped_column(ForeignKey(...)) for actual DB linkage.
relationship() Reference
| Keyword | Meaning | Example | Notes |
|---|---|---|---|
| back_populates | Explicitly links both sides | relationship(back_populates="author") | preferred for clarity |
| backref | Auto-creates reverse side | relationship("User", backref="posts") | convenient, less explicit |
| cascade="all, delete-orphan" | Child lifecycle follows parent | relationship(..., cascade="all, delete-orphan") | good for true owned children |
| lazy="select" | Load on access | relationship(lazy="select") | default-style lazy loading |
| lazy="joined" | Eager load via join | relationship(lazy="joined") | good for singular relations |
| lazy="selectin" | Eager load with separate IN (...) query | relationship(lazy="selectin") | often best for collections |
| lazy="dynamic" | Legacy query-producing collection | relationship(lazy="dynamic") | less common in typed 2.x style |
| secondary=... | Many-to-many association table | relationship(secondary=post_tags) | for many-to-many |
| uselist=False | Treat as one-to-one in ORM | relationship(uselist=False) | pair with DB uniqueness |
| order_by=... | Default collection ordering | relationship(order_by="Post.id.desc()") | string or expression |
| foreign_keys=[...] | Disambiguates multiple FKs | relationship(foreign_keys=[captain_id]) | useful with self/multi-FK tables |
| passive_deletes=True | Rely more on DB delete behavior | relationship(passive_deletes=True) | useful with DB cascades |
Relationship Patterns
One-to-many
class User(db.Model):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
posts: Mapped[list["Post"]] = relationship(back_populates="author")
class Post(db.Model):
__tablename__ = "posts"
id: Mapped[int] = mapped_column(primary_key=True)
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), nullable=False)
author: Mapped["User"] = relationship(back_populates="posts")
One user has many posts.
Many-to-one
From the child side:
post.author
Many posts can point to one user.
One-to-one
class User(db.Model):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
profile: Mapped["Profile"] = relationship(back_populates="user", uselist=False)
class Profile(db.Model):
__tablename__ = "profiles"
id: Mapped[int] = mapped_column(primary_key=True)
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), unique=True, nullable=False)
user: Mapped["User"] = relationship(back_populates="profile")
unique=True on user_id is what makes it one-to-one at the DB level.
Many-to-many
from sqlalchemy import Table, Column, ForeignKey
post_tags = Table(
"post_tags",
db.metadata,
Column("post_id", ForeignKey("posts.id"), primary_key=True),
Column("tag_id", ForeignKey("tags.id"), primary_key=True),
)
class Post(db.Model):
__tablename__ = "posts"
id: Mapped[int] = mapped_column(primary_key=True)
title: Mapped[str] = mapped_column(nullable=False)
tags: Mapped[list["Tag"]] = relationship(
secondary=post_tags,
back_populates="posts",
)
class Tag(db.Model):
__tablename__ = "tags"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(unique=True, nullable=False)
posts: Mapped[list["Post"]] = relationship(
secondary=post_tags,
back_populates="tags",
)
Self-referential relationship
class Category(db.Model):
__tablename__ = "categories"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(nullable=False)
parent_id: Mapped[int | None] = mapped_column(ForeignKey("categories.id"))
parent: Mapped["Category | None"] = relationship(
remote_side=[id],
back_populates="children",
)
children: Mapped[list["Category"]] = relationship(back_populates="parent")
remote_side tells SQLAlchemy how to distinguish parent from child in a self-join.
Session Lifecycle and CRUD
Object States in the ORM
| State | Meaning | Example |
|---|---|---|
| Transient | new object, not in session | user = User(username="alice") |
| Pending | added to session, not flushed/committed yet | db.session.add(user) |
| Persistent | stored in DB and tracked by session | after flush/commit |
| Deleted | marked for deletion | db.session.delete(user) |
| Detached | object no longer attached to live session | after session ends or object is expunged |
Detached objects often surprise people when lazy-loaded relationships are accessed later.
CRUD Quick Reference
Create one row
user = User(username="alice", email="alice@example.com")
db.session.add(user)
db.session.commit()
Create many rows
db.session.add_all([
User(username="alice", email="alice@example.com"),
User(username="bob", email="bob@example.com"),
])
db.session.commit()
Read by primary key
user = db.session.get(User, 1)
Update
user = db.session.get(User, 1)
user.email = "new@example.com"
db.session.commit()
Delete
user = db.session.get(User, 1)
db.session.delete(user)
db.session.commit()
flush() vs commit()
user = User(username="alice", email="alice@example.com")
db.session.add(user)
db.session.flush()
print(user.id) # available before full commit
db.session.commit()
- flush() sends pending SQL to the DB now
- commit() makes the changes permanent and ends the current transaction
Use flush() when you need generated IDs inside the same transaction.
refresh()
db.session.refresh(user)
Reloads an object from the database.
Transactions and Session Rules
Basic write pattern
try:
user = User(username="alice", email="alice@example.com")
db.session.add(user)
db.session.commit()
except Exception:
db.session.rollback()
raise
Group multiple writes in one transaction
with db.session.begin():
user = User(username="alice", email="alice@example.com")
db.session.add(user)
post = Post(title="Hello", body="World", author=user)
db.session.add(post)
If any step fails, the transaction rolls back.
commit() vs rollback()
- commit() applies changes permanently
- rollback() abandons the current failed transaction
Autoflush
The session may flush automatically before some queries.
If needed:
with db.session.no_autoflush:
...
Use sparingly.
Keep one transaction owner
A common design rule is: one route or one service owns the final commit().
In Flask
Flask-SQLAlchemy handles normal session cleanup with app/request context.
You usually do not manually create and dispose sessions for ordinary request handling.
Querying and Loading
Modern Querying with select()
Get all users
from sqlalchemy import select
stmt = select(User)
users = db.session.scalars(stmt).all()
Get one user by username
stmt = select(User).where(User.username == "alice")
user = db.session.scalar(stmt)
Order by newest first
stmt = select(User).order_by(User.id.desc())
users = db.session.scalars(stmt).all()
Limit rows
stmt = select(User).order_by(User.id.desc()).limit(10)
users = db.session.scalars(stmt).all()
filter_by() shortcut
stmt = select(User).filter_by(username="alice")
user = db.session.scalar(stmt)
filter_by() is fine for simple equality checks. where() is more explicit and flexible.
Select specific columns
stmt = select(User.id, User.username)
rows = db.session.execute(stmt).all()
These are row-like results, not User model instances.
Filtering Cheat Sheet
Equality
stmt = select(User).where(User.username == "alice")
Not equal
stmt = select(User).where(User.username != "alice")
Greater / less than
stmt = select(Post).where(Post.id > 100)
stmt = select(Post).where(Post.id <= 500)
IN (...)
stmt = select(User).where(User.id.in_([1, 2, 3]))
LIKE
stmt = select(User).where(User.username.like("ali%"))
ILIKE
stmt = select(User).where(User.username.ilike("%alice%"))
is_(None) and is_not(None)
stmt = select(User).where(User.team_id.is_(None))
stmt = select(User).where(User.team_id.is_not(None))
Use these instead of == None.
Boolean checks
stmt = select(User).where(User.is_admin.is_(True))
stmt = select(User).where(User.is_admin.is_(False))
Multiple conditions
stmt = select(User).where(
User.is_admin.is_(False),
User.email.ilike("%@example.com"),
)
or_()
from sqlalchemy import or_
stmt = select(User).where(
or_(User.username == "alice", User.username == "bob")
)
and_()
from sqlalchemy import and_
stmt = select(User).where(
and_(User.is_admin.is_(True), User.is_banned.is_(False))
)
between()
stmt = select(Post).where(Post.id.between(10, 20))
Result Method Cheat Sheet
| Method | Returns | Good for | Important note |
|---|---|---|---|
| db.session.execute(stmt) | Result of Row objects | multi-column selects, explicit row handling | rows are not ORM entities unless selected that way |
| db.session.scalars(stmt) | ScalarResult over first column | ORM entities or single-column selects | common for select(User) |
| db.session.scalar(stmt) | first column of first row, or None | "give me one thing or none" | returns first selected column, not always a full row |
| .scalar_one() | exactly one scalar | strict one-row expectation | raises if zero or many |
| .scalar_one_or_none() | zero or one scalar | optional unique row | raises if many |
| .first() | first row or None | row-based access | returns row object, not scalar entity |
Examples:
result = db.session.execute(select(User.id, User.username))
rows = result.all()
users = db.session.scalars(select(User)).all()
user = db.session.scalar(select(User).where(User.username == "alice"))
user = db.session.execute(
select(User).where(User.email == "alice@example.com")
).scalar_one()
Joins, Aggregates, and Grouping
Simple join
stmt = (
select(Post)
.join(Post.author)
.where(User.username == "alice")
)
posts = db.session.scalars(stmt).all()
Count rows
from sqlalchemy import func
stmt = select(func.count()).select_from(User)
count = db.session.scalar(stmt)
Count posts per user
stmt = (
select(User.username, func.count(Post.id))
.join(Post, Post.user_id == User.id)
.group_by(User.id, User.username)
)
rows = db.session.execute(stmt).all()
distinct()
stmt = select(Post.user_id).distinct()
user_ids = db.session.scalars(stmt).all()
having()
stmt = (
select(User.username, func.count(Post.id).label("post_count"))
.join(Post, Post.user_id == User.id)
.group_by(User.id, User.username)
.having(func.count(Post.id) >= 5)
)
rows = db.session.execute(stmt).all()
Loading Related Data Efficiently
A major ORM performance issue is the N+1 query problem.
Bad pattern:
users = db.session.scalars(select(User)).all()
for user in users:
print(len(user.posts)) # may trigger one extra query per user
Use selectinload()
from sqlalchemy.orm import selectinload
stmt = select(User).options(selectinload(User.posts))
users = db.session.scalars(stmt).all()
Best for many collection relationships.
Use joinedload()
from sqlalchemy.orm import joinedload
stmt = select(Post).options(joinedload(Post.author))
posts = db.session.scalars(stmt).all()
Good for loading a single related object along with the main row.
joinedload() + collections
stmt = select(User).options(joinedload(User.posts))
users = db.session.execute(stmt).unique().scalars().all()
When joinedload() hits a collection, the join repeats each parent row once per child, so .unique() is required to de-duplicate the returned entities.
Rule of thumb:
- use selectinload() for collections
- use joinedload() for singular relations
- measure on hot paths
Pagination with Flask-SQLAlchemy
from sqlalchemy import select
page = db.paginate(
select(Post).order_by(Post.id.desc()),
page=1,
per_page=20,
)
Useful attributes:
page.items
page.page
page.pages
page.total
page.has_next
page.has_prev
page.next_num
page.prev_num
In a route:
from flask import request
page_obj = db.paginate(
select(Post).order_by(Post.id.desc()),
page=request.args.get("page", 1, type=int),
per_page=20,
max_per_page=100,
)
Flask-SQLAlchemy Convenience Helpers
user = db.get_or_404(User, 1)
post = db.one_or_404(select(Post).where(Post.id == 123))
Common helpers:
- db.get_or_404(Model, pk)
- db.first_or_404(select(...))
- db.one_or_404(select(...))
- db.paginate(select(...))
Regex Support
Regex support depends on the database backend.
stmt = select(User).where(User.username.regexp_match(r"^[a-z0-9_]{3,32}$"))
users = db.session.scalars(stmt).all()
Caveats:
- PostgreSQL has its own regex operators
- MySQL / MariaDB have their own behavior
- SQLite does not provide regex by default unless you register a function
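On SQLite, one approach (a sketch, not the only option) is to register a Python REGEXP function on each connection. SQLAlchemy's SQLite dialect renders regexp_match() as the REGEXP operator, and SQLite dispatches "value REGEXP pattern" to a user function called as regexp(pattern, value).

```python
import re
import sqlite3

from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite:///:memory:")


@event.listens_for(engine, "connect")
def register_regexp(dbapi_connection, _connection_record):
    if isinstance(dbapi_connection, sqlite3.Connection):
        # SQLite evaluates "X REGEXP Y" as regexp(Y, X): pattern first
        dbapi_connection.create_function(
            "REGEXP",
            2,
            lambda pattern, value: value is not None
            and re.search(pattern, value) is not None,
        )


with engine.connect() as conn:
    matched = conn.execute(
        text("SELECT 'alice_01' REGEXP :p"), {"p": r"^[a-z0-9_]+$"}
    ).scalar()  # 1 for a match, 0 otherwise
```

A Python-level regex runs for every candidate row and cannot use indexes, so this suits small tables or rare queries.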
Prefer simpler operators when possible:
stmt = select(User).where(User.username.like("alice%"))
Raw SQL, SQLite, and SQL Injection
Executing Raw SQL Safely
Use raw SQL only when needed. Prefer ORM/Core expressions otherwise.
Safe text() query
from sqlalchemy import text
stmt = text("SELECT id, username FROM users WHERE email = :email")
rows = db.session.execute(stmt, {"email": "alice@example.com"}).all()
Safe insert
stmt = text(
"INSERT INTO audit_log (event_type, user_id) VALUES (:event_type, :user_id)"
)
db.session.execute(stmt, {"event_type": "login", "user_id": 1})
db.session.commit()
Low-level engine usage
from sqlalchemy import create_engine, text
engine = create_engine("sqlite:///app.db")
with engine.begin() as conn:
conn.execute(
text("INSERT INTO logs (message) VALUES (:msg)"),
{"msg": "hello"},
)
Reasonable cases for raw SQL:
- vendor-specific features
- complex reporting queries
- preserving existing SQL
- migration or data-fix scripts
SQLite PRAGMA Explained Simply
PRAGMA is SQLite-specific configuration SQL.
Common useful PRAGMAs:
PRAGMA foreign_keys = ON;
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA busy_timeout = 5000;
Meaning:
- foreign_keys=ON -> enforce foreign keys
- journal_mode=WAL -> write-ahead logging
- synchronous=NORMAL -> durability/performance balance
- busy_timeout=5000 -> wait on lock instead of failing immediately
Register PRAGMAs on SQLite connections
import sqlite3
from sqlalchemy import event
from sqlalchemy.engine import Engine
@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, _connection_record):
if isinstance(dbapi_connection, sqlite3.Connection):
cursor = dbapi_connection.cursor()
cursor.execute("PRAGMA foreign_keys=ON;")
cursor.execute("PRAGMA journal_mode=WAL;")
cursor.execute("PRAGMA synchronous=NORMAL;")
cursor.execute("PRAGMA busy_timeout=5000;")
cursor.close()
SQLite is usually fine for:
- local development
- prototypes
- low-concurrency apps
- simple single-node deployments
Avoiding SQL Injection
Safe: ORM expressions
stmt = select(User).where(User.email == user_input_email)
user = db.session.scalar(stmt)
Safe: raw SQL with bound parameters
stmt = text("SELECT id, username FROM users WHERE email = :email")
rows = db.session.execute(stmt, {"email": user_input_email}).all()
Unsafe: string interpolation
# BAD
stmt = text(f"SELECT id, username FROM users WHERE email = '{user_input_email}'")
Unsafe: user input used as SQL structure
# BAD
sort_by = request.args["sort"]
stmt = text(f"SELECT * FROM users ORDER BY {sort_by}")
> Bound parameters protect values, not SQL structure like:
- column names
- table names
- keywords
- sort directions
Safe dynamic sorting with an allowlist
sort_map = {
"username": User.username,
"email": User.email,
"created_at": User.created_at,
}
sort_col = sort_map.get(request.args.get("sort"), User.created_at)
stmt = select(User).order_by(sort_col.desc())
users = db.session.scalars(stmt).all()
Safe dynamic direction
direction = request.args.get("direction", "desc")
direction = "desc" if direction not in {"asc", "desc"} else direction
order_expr = sort_col.desc() if direction == "desc" else sort_col.asc()
stmt = select(User).order_by(order_expr)
Extra anti-SQL injection rules:
- never use f-strings for SQL with user input
- never use % formatting for SQL with user input
- prefer ORM or bound parameters
- allowlist columns, tables, sort directions, and operators
- validate input even when using the ORM
Migrations
Alembic Migrations Explained Simply
A migration is a versioned schema change script.
Examples:
- create table
- add column
- remove column
- create index
- rename constraint
- transform existing data
Why migrations matter:
- teammates stay in sync
- staging and production stay in sync
- schema history is preserved
- rollback is deliberate instead of guesswork
db.create_all() vs Alembic
db.create_all() is fine for tiny throwaway projects and experiments.
It is not a schema history or deployment strategy.
Flask-Migrate vs Alembic
- Alembic is the migration engine
- Flask-Migrate integrates Alembic into Flask CLI commands like flask db upgrade
Flask-Migrate / Alembic Commands
flask db init
flask db migrate -m "create users table"
flask db upgrade
flask db downgrade
flask db current
flask db history
flask db stamp head
| Command | Meaning |
|---|---|
| flask db init | create migrations environment |
| flask db migrate -m "..." | autogenerate a migration draft |
| flask db upgrade | apply migrations forward |
| flask db downgrade | move backward to an older revision |
| flask db current | show current DB revision |
| flask db history | show migration history |
| flask db stamp head | mark DB as current without running SQL |
> Always read autogenerated migrations before applying them.
Example Migration Files
Add a column
from alembic import op
import sqlalchemy as sa
revision = "123456789abc"
down_revision = "abcdef123456"
branch_labels = None
depends_on = None
def upgrade():
op.add_column(
"users",
sa.Column("is_admin", sa.Boolean(), nullable=False, server_default=sa.false()),
)
op.create_index(op.f("ix_users_is_admin"), "users", ["is_admin"], unique=False)
def downgrade():
op.drop_index(op.f("ix_users_is_admin"), table_name="users")
op.drop_column("users", "is_admin")
Create a new table
from alembic import op
import sqlalchemy as sa
revision = "newrev123"
down_revision = "oldrev456"
branch_labels = None
depends_on = None
def upgrade():
op.create_table(
"tags",
sa.Column("id", sa.Integer(), primary_key=True),
sa.Column("name", sa.String(length=64), nullable=False, unique=True),
)
def downgrade():
op.drop_table("tags")
Data migration
from alembic import op
revision = "filldefaults001"
down_revision = "prev000"
branch_labels = None
depends_on = None
def upgrade():
op.execute("UPDATE users SET is_admin = 0 WHERE is_admin IS NULL")
def downgrade():
pass
Use data migrations carefully and test them on realistic data.
Production and Operations
Production Best Practices
- Use the app factory pattern
- Use one shared db extension object
- Prefer select() + db.session
- Use migrations for every schema change
- Keep transactions short
- Use indexes intentionally
- Avoid N+1 queries
- Store timestamps in UTC
- Keep route handlers thin
- Roll back on DB write failures
- Configure engine health settings
- Prefer PostgreSQL for serious production workloads
- Do not expose raw DB errors to users
- Be careful with lazy loading outside request/session scope
- Review every autogenerated migration
Choosing the Right Database Backend
SQLite
SQLite is a file, not a server.
Good for:
- local development
- prototypes
- small single-host deployments
- low write concurrency
Weak for:
- many concurrent writes
- multiple app replicas
- network filesystems
- heavy live reporting
PostgreSQL
PostgreSQL is the default recommendation for serious production use.
Good for:
- concurrent traffic
- multi-worker / multi-instance deployments
- stronger locking behavior
- richer SQL features
- easier operational growth
Production Configuration Checklist
At minimum, production config should cover:
- SQLALCHEMY_DATABASE_URI
- SQLALCHEMY_TRACK_MODIFICATIONS = False
- SQLALCHEMY_ENGINE_OPTIONS
- credentials from environment variables
- proper secret management
- migrations during deploy
- backup strategy
- logging and monitoring
- sensible connection pooling
- a deliberate DB backend choice
Important Flask-SQLAlchemy Configuration Keys
| Key | Purpose | Example |
|---|---|---|
| SQLALCHEMY_DATABASE_URI | main DB URL | "postgresql+psycopg://user:pass@db/app" |
| SQLALCHEMY_TRACK_MODIFICATIONS | event overhead toggle | False |
| SQLALCHEMY_ENGINE_OPTIONS | engine / pool settings | {"pool_pre_ping": True} |
| SQLALCHEMY_ECHO | SQL logging | False in normal production |
| SQLALCHEMY_BINDS | multiple DBs | {"analytics": "..."} |
Examples:
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql+psycopg://user:password@db/mydb"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_ECHO"] = False
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
"pool_pre_ping": True,
"pool_recycle": 1800,
}
Engine and Pool Tuning
These settings mostly matter for server DBs like PostgreSQL and MySQL.
| Option | What it does | Typical use |
|---|---|---|
| pool_pre_ping=True | checks connection health before use | protects against stale pooled connections |
| pool_recycle=1800 | recycles old pooled connections | useful with proxies / infra timeouts |
| pool_size=10 | persistent pool size | steady-state capacity |
| max_overflow=20 | burst capacity above pool size | traffic spikes |
| pool_timeout=30 | wait time for a free pooled connection | under load |
Example:
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
"pool_pre_ping": True,
"pool_recycle": 1800,
"pool_size": 10,
"max_overflow": 20,
"pool_timeout": 30,
}
Rough sizing rule:
- total possible connections ~= app workers x pool size, plus overflow
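As a worked example of that rule (the worker count is a made-up deployment shape; the pool numbers match the example above):

```python
workers = 4          # app processes, e.g. Gunicorn workers (assumption)
pool_size = 10       # persistent connections held per process
max_overflow = 20    # extra burst connections allowed per process

steady_state = workers * pool_size                  # connections held long-term
worst_case = workers * (pool_size + max_overflow)   # peak during traffic spikes

# The database's max_connections must comfortably exceed worst_case,
# leaving headroom for migrations, cron jobs, and admin sessions.
print(steady_state, worst_case)  # 40 120
```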
Session and Transaction Discipline in Production
Treat db.session as request-scoped
@bp.post("/users")
def create_user():
user = User(username="alice", email="alice@example.com")
db.session.add(user)
db.session.commit()
return "ok"
Roll back on errors
try:
db.session.add(user)
db.session.commit()
except Exception:
db.session.rollback()
raise
Keep transactions short
Good:
with db.session.begin():
db.session.add(user)
db.session.add(profile)
Bad pattern:
- open transaction
- call external APIs
- wait on the network
- keep the transaction open
One owner for final commit()
One route or service should own the transaction boundary.
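The ownership idea can be sketched with stdlib sqlite3 standing in for db.session; the table and helper names here are illustrative, not from this document:

```python
import sqlite3

def add_user(conn, username):
    # Helpers do work but never commit -- the caller owns the transaction boundary.
    conn.execute("INSERT INTO users (username) VALUES (?)", (username,))

def add_profile(conn, username, bio):
    conn.execute("INSERT INTO profiles (username, bio) VALUES (?, ?)", (username, bio))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("CREATE TABLE profiles (username TEXT, bio TEXT)")

try:
    add_user(conn, "alice")
    add_profile(conn, "alice", "hello")
    conn.commit()          # the single commit at the boundary
except Exception:
    conn.rollback()        # both inserts succeed together or not at all
    raise
```

If helpers committed on their own, a failure halfway through would leave a user without a profile.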
Do not share ORM sessions across threads
Do not put db.session or live ORM objects into global state.
SQLite Production Notes
SQLite can be okay in production if the app is small and deployment is simple.
Good fit:
- one host
- one app instance
- light/moderate traffic
- small number of writes
Less good fit:
- multiple containers writing
- multiple app replicas
- live write contention
- heavy analytics during traffic
Useful PRAGMAs:
import sqlite3
from sqlalchemy import event
from sqlalchemy.engine import Engine
@event.listens_for(Engine, "connect")
def set_sqlite_pragma(dbapi_connection, _connection_record):
if isinstance(dbapi_connection, sqlite3.Connection):
cursor = dbapi_connection.cursor()
cursor.execute("PRAGMA foreign_keys=ON;")
cursor.execute("PRAGMA journal_mode=WAL;")
cursor.execute("PRAGMA synchronous=NORMAL;")
cursor.execute("PRAGMA busy_timeout=5000;")
cursor.close()
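A PRAGMA can be read back to confirm it actually took effect; a minimal stdlib check, using an in-memory database for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("PRAGMA foreign_keys=ON;")
cursor.execute("PRAGMA busy_timeout=5000;")

# Issuing a PRAGMA without a value returns its current setting.
fk_on = cursor.execute("PRAGMA foreign_keys;").fetchone()[0]
timeout_ms = cursor.execute("PRAGMA busy_timeout;").fetchone()[0]
print(fk_on, timeout_ms)  # 1 5000
```

Note that journal_mode=WAL does not apply to in-memory databases, so verify it against a real file.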
PostgreSQL Production Notes
A typical PostgreSQL URL:
postgresql+psycopg://appuser:strongpassword@db:5432/appdb
Strong reasons to use PostgreSQL:
- better concurrent write handling
- safer multi-worker deployment
- stronger indexing and query planning
- richer SQL support
- easier operational growth
Use a dedicated application DB user with least privilege.
Production Config Example
import os
class ProductionConfig:
DEBUG = False
TESTING = False
SQLALCHEMY_DATABASE_URI = os.environ["DATABASE_URL"]
SQLALCHEMY_TRACK_MODIFICATIONS = False
SQLALCHEMY_ECHO = False
SQLALCHEMY_ENGINE_OPTIONS = {
"pool_pre_ping": True,
"pool_recycle": 1800,
"pool_size": 10,
"max_overflow": 20,
"pool_timeout": 30,
}
Example environment variables:
export DATABASE_URL="postgresql+psycopg://appuser:strongpassword@db:5432/appdb"
export SECRET_KEY="replace-me"
For SQLite (note the four slashes: the sqlite:/// prefix followed by the absolute path /data/app.db):
export DATABASE_URL="sqlite:////data/app.db"
Production App Factory Example
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
db = SQLAlchemy(model_class=Base)
def create_app():
app = Flask(__name__)
app.config.from_object("yourapp.config.ProductionConfig")
db.init_app(app)
return app
Deployment and Migration Workflow
Good workflow:
- change models
- generate migration
- review migration
- test migration on a non-production DB
- deploy code
- run migration
- verify app health
Typical commands:
flask db migrate -m "add team_uid to teams"
flask db upgrade
Never rely on db.create_all() in production deploys.
Before destructive changes:
- back up the database
- verify old schema is unused
- consider a multi-step deploy
Performance and Query Hygiene
Avoid N+1 queries
Bad:
users = db.session.scalars(select(User)).all()
for user in users:
print(len(user.posts))
Better:
from sqlalchemy.orm import selectinload
users = db.session.scalars(
select(User).options(selectinload(User.posts))
).all()
Add indexes where queries actually need them
Good candidates:
- usernames / emails
- foreign keys
- ordered timestamps
- frequently filtered score or status fields
Example:
email: Mapped[str] = mapped_column(unique=True, index=True, nullable=False)
team_id: Mapped[int | None] = mapped_column(ForeignKey("teams.id"), index=True)
score: Mapped[int] = mapped_column(index=True, default=0)
Paginate large result sets
page = db.paginate(
select(Post).order_by(Post.id.desc()),
page=1,
per_page=20,
)
Use eager loading intentionally
- selectinload() is often best for collections
- joinedload() is often best for singular related objects
Security Considerations
Keep DB credentials in environment variables
Good:
SQLALCHEMY_DATABASE_URI = os.environ["DATABASE_URL"]
Bad:
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg://produser:plaintextpassword@db/prod"
Parameterize raw SQL
Good:
stmt = text("SELECT * FROM users WHERE email = :email")
db.session.execute(stmt, {"email": email})
Bad:
stmt = text(f"SELECT * FROM users WHERE email = '{email}'")
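The difference is observable with plain stdlib sqlite3: a bound parameter treats the whole input as one value, while string formatting lets the input rewrite the query (the table here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

hostile = "' OR '1'='1"  # classic injection payload

# Bound parameter: the payload is compared as a literal string -> no rows match.
safe = conn.execute("SELECT * FROM users WHERE email = ?", (hostile,)).fetchall()

# String formatting: the payload escapes the quotes and rewrites the WHERE
# clause into "email = '' OR '1'='1'" -> every row leaks.
unsafe = conn.execute(f"SELECT * FROM users WHERE email = '{hostile}'").fetchall()

print(len(safe), len(unsafe))  # 0 1
```

The same principle is why the text() example above uses :email instead of an f-string.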
Use allowlists for dynamic sorting / filtering
sort_map = {
"username": User.username,
"created_at": User.created_at,
}
sort_col = sort_map.get(sort_field, User.created_at)
stmt = select(User).order_by(sort_col.desc())
Do not leak raw DB errors to users
Log server-side details and return safe client-facing errors.
Observability and Troubleshooting
Useful things to log:
- request ID
- route name
- DB errors
- migration failures
- slow endpoints
- connection timeout errors
Temporary debugging knob:
app.config["SQLALCHEMY_ECHO"] = True
Simple health check:
from sqlalchemy import text
@app.get("/healthz")
def healthz():
try:
db.session.execute(text("SELECT 1"))
return {"status": "ok"}, 200
except Exception:
return {"status": "db_error"}, 500
Repeated DB issues often point to:
- stale connections -> pool_pre_ping
- too many app connections -> resize pool/workers or scale DB
- locking/contention -> query tuning or move off SQLite
- long requests -> N+1, missing indexes, slow migrations, oversized transactions
Backups and Recovery
Minimum backup plan:
- how often backups run
- where they are stored
- how long they are retained
- how to restore them
- when restore was last tested
SQLite note:
- back up carefully with WAL/file consistency in mind
PostgreSQL note:
- test restores, not just backup creation
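For SQLite, the stdlib sqlite3.Connection.backup() method takes a transaction-consistent snapshot of the whole database, which is safer than copying the file directly while WAL is active. A minimal sketch (in-memory databases for illustration; in real use the destination would be a file):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE notes (body TEXT)")
src.execute("INSERT INTO notes VALUES ('hello')")
src.commit()

# backup() copies the database page-by-page under SQLite's own locking,
# so the copy is consistent even if the source is being written to.
dst = sqlite3.connect(":memory:")  # e.g. sqlite3.connect("backup.db") in practice
src.backup(dst)

rows = dst.execute("SELECT body FROM notes").fetchall()
print(rows)  # [('hello',)]
```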
Background Jobs, Scripts, and CLI Tasks
Background jobs still need app context and proper session handling.
from yourapp import create_app
from yourapp.extensions import db
from yourapp.models import User
app = create_app()
with app.app_context():
try:
user = User(username="cli-user", email="cli@example.com")
db.session.add(user)
db.session.commit()
except Exception:
db.session.rollback()
raise
Jobs should:
- open app context
- do work
- commit or rollback
- exit cleanly
Pre-Launch Checklist
Before shipping, confirm:
- SQLALCHEMY_TRACK_MODIFICATIONS is False
- DB credentials come from environment variables
- migrations are current
- migrations were tested before deploy
- indexes exist for hot query paths
- raw SQL uses bound parameters
- dynamic sort/filter fields are allowlisted
- health checks exist
- backup plan exists
- rollback plan exists
- pool settings match worker count and DB limits
- SQLite is only used if its limits are acceptable
- logs and monitoring are in place
Simple Production Rules of Thumb
- use PostgreSQL for serious production
- keep SQLite for simple/small/single-node cases
- keep transactions short
- commit once per unit of work
- always rollback after failed DB writes
- review every migration before applying it
- do not use db.create_all() as a deployment strategy
- index what you query often
- do not trust user input in raw SQL
- do not let pool size exceed what the DB can realistically handle
Pitfalls and Reference
Common Mistakes
- thinking relationship() creates the foreign key
- using db.create_all() as a migration strategy
- using raw SQL with f-strings
- forgetting rollback() after failed DB writes
- loading related rows one-by-one in loops
- assuming SQLite behaves like PostgreSQL
- overusing backref instead of explicit back_populates
- indexing everything without measuring read/write tradeoffs
Glossary
| Term | Meaning |
|---|---|
| Engine | factory for DB connections and low-level SQL execution |
| Connection | a live DB connection from an engine |
| Session | ORM unit of work and transaction manager |
| db.Model | base class for Flask-SQLAlchemy models |
| DeclarativeBase | SQLAlchemy 2.x typed declarative base |
| Mapped[...] | type annotation for ORM-mapped attributes |
| mapped_column() | modern column definition helper |
| relationship() | Python-side link between mapped classes |
| ForeignKey | DB-level reference from one table to another |
| flush() | send pending SQL without committing the transaction |
| commit() | finish the transaction permanently |
| rollback() | abandon the current failed transaction |
| select() | modern SQLAlchemy query entry point |
| text() | wrapper for raw SQL |
| joinedload() | eager load related objects with a join |
| selectinload() | eager load related objects with separate IN (...) queries |
| PRAGMA | SQLite-specific config / inspection SQL |
| migration | versioned schema change script |
| revision | one migration step in Alembic history |
| head | latest migration revision |
| autogenerate | Alembic feature that drafts migration changes |
Copy-Paste Starter Skeleton
extensions.py
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
db = SQLAlchemy(model_class=Base)
models/user.py
from __future__ import annotations
from datetime import datetime, timezone
from sqlalchemy import String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from yourapp.extensions import db
class User(db.Model):
__tablename__ = "users"
id: Mapped[int] = mapped_column(primary_key=True)
username: Mapped[str] = mapped_column(String(64), unique=True, index=True, nullable=False)
email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
created_at: Mapped[datetime] = mapped_column(
default=lambda: datetime.now(timezone.utc),
nullable=False,
)
posts: Mapped[list["Post"]] = relationship(
back_populates="author",
cascade="all, delete-orphan",
)
models/post.py
from __future__ import annotations
from sqlalchemy import String, Text, ForeignKey
from sqlalchemy.orm import Mapped, mapped_column, relationship
from yourapp.extensions import db
class Post(db.Model):
__tablename__ = "posts"
id: Mapped[int] = mapped_column(primary_key=True)
title: Mapped[str] = mapped_column(String(200), nullable=False)
body: Mapped[str] = mapped_column(Text, nullable=False)
user_id: Mapped[int] = mapped_column(ForeignKey("users.id"), nullable=False, index=True)
author: Mapped["User"] = relationship(back_populates="posts")
app factory
from flask import Flask
from yourapp.extensions import db
def create_app():
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
"pool_pre_ping": True,
}
db.init_app(app)
return app
query example
from sqlalchemy import select
from yourapp.extensions import db
from yourapp.models.user import User
stmt = select(User).where(User.username == "alice")
user = db.session.scalar(stmt)
safe raw SQL example
from sqlalchemy import text
from yourapp.extensions import db
rows = db.session.execute(
text("SELECT id, username FROM users WHERE email = :email"),
{"email": "alice@example.com"},
).all()
pagination example
from flask import request
from sqlalchemy import select
from yourapp.extensions import db
from yourapp.models.post import Post
page_obj = db.paginate(
select(Post).order_by(Post.id.desc()),
page=request.args.get("page", 1, type=int),
per_page=20,
max_per_page=100,
)
migration flow
flask db init
flask db migrate -m "create users and posts"
flask db upgrade
One-Line Memory Hooks
- ORM maps Python objects to database rows
- Engine talks to the database
- Session tracks ORM work and transactions
- Mapped[...] describes the Python type of a mapped attribute
- mapped_column() defines a DB column
- relationship() gives Python-side navigation between linked tables
- ForeignKey() creates the actual DB linkage
- Alembic tracks schema history
- Flask-Migrate gives Alembic a Flask CLI
- PRAGMA is SQLite-only configuration SQL
- Bound parameters protect values from SQL injection
- Allowlists protect dynamic SQL structure