Python Pandas - SQL



Python's Pandas library provides powerful tools for interacting with SQL databases, allowing you to run SQL operations directly from Python. Through the pandas.io.sql module, you can query, retrieve, and save data between Pandas objects (such as DataFrame or Series) and SQL databases.

Combining the Pandas library with SQL databases simplifies data analysis by making it easy to load, transform, and store data. In this tutorial, we will learn key Pandas SQL operations, including reading and writing data between Pandas and SQL databases, and handling data types effectively.

Integrating SQL with Pandas

Pandas enables SQL operations with minimal setup, offering a number of tools to interact with various SQL databases. This integration allows you to perform operations like reading data from a database, writing DataFrames to a SQL table, and running SQL queries directly. Before performing SQL operations with Pandas, it is important to install the relevant libraries −

  • Apache Arrow ADBC Drivers: Use these for optimal performance, null handling, and type detection.

  • SQLAlchemy: If an ADBC driver isn't available, install SQLAlchemy alongside your database driver library (e.g., psycopg2 for PostgreSQL or pymysql for MySQL). SQLite is supported natively in Python's standard library.

  • SQLite with sqlite3.Connection: If SQLAlchemy is not installed, you can use a sqlite3.Connection in place of a SQLAlchemy engine, connection, or URI string.
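The third option above needs no installation at all, since sqlite3 ships with Python. The following minimal sketch shows a sqlite3.Connection standing in for a SQLAlchemy engine (the table and values are made up for illustration); the commented lines indicate how an engine would be created instead if SQLAlchemy were installed.

```python
import sqlite3
import pandas as pd

# SQLite needs no extra installation: a sqlite3.Connection can stand in
# for a SQLAlchemy engine when calling Pandas SQL functions
conn = sqlite3.connect(":memory:")

# With SQLAlchemy installed, you would create an engine instead, e.g.:
#   from sqlalchemy import create_engine
#   engine = create_engine("sqlite:///:memory:")  # or a PostgreSQL/MySQL URI

# Create a tiny sample table and read it back through Pandas
conn.execute("CREATE TABLE demo (x INTEGER)")
conn.execute("INSERT INTO demo VALUES (1), (2)")
df = pd.read_sql("SELECT * FROM demo", conn)
print(df.shape)
```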

Key Pandas Functions for SQL

Pandas provides several methods to work with SQL databases −

  • read_sql_table(): Reads a SQL database table into a Pandas DataFrame.

  • read_sql_query(): Executes a SQL query and returns the result as a DataFrame.

  • read_sql(): Reads either a table or query into a DataFrame.

  • to_sql(): Writes a DataFrame to a SQL database.
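To see the relationship between these functions, the sketch below (using a small made-up table in an in-memory SQLite database) runs the same query through read_sql_query() and through the generic read_sql(); the generic function recognizes the query string and delegates to read_sql_query(), so both calls return identical results.

```python
import sqlite3
import pandas as pd

# In-memory SQLite database with a small sample table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (id INTEGER, title TEXT)")
conn.execute("INSERT INTO Books VALUES (1, 'Python 101'), (2, 'SQL Basics')")

# read_sql_query() always expects a SQL query string
df_query = pd.read_sql_query("SELECT title FROM Books WHERE id = 2", conn)

# read_sql() accepts the same query and delegates to read_sql_query()
df_generic = pd.read_sql("SELECT title FROM Books WHERE id = 2", conn)

print(df_query.equals(df_generic))
```

Note that read_sql_table() is the one function in this family that requires SQLAlchemy; it cannot be used with a plain sqlite3.Connection.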

Reading Data from SQL into a Pandas DataFrame

The read_sql() method is used to read a database table into a Pandas DataFrame or to execute a SQL query and retrieve its results directly into a DataFrame. It is a convenient wrapper for both read_sql_table() and read_sql_query(), and automatically determines whether to process a table or a query based on the input.

Example

The following example demonstrates SQL integration with Pandas using SQLite, which can run as a temporary in-memory database. In this example, we use read_sql() to connect to the SQLite database and load a table into a Pandas DataFrame.

import pandas as pd
import sqlite3

# Create a connection to the database
conn = sqlite3.connect(":memory:")

# Create a sample table
conn.execute("CREATE TABLE Students (id INTEGER, Name TEXT, Marks REAL, Age INTEGER)")
conn.execute("INSERT INTO Students VALUES (1, 'Kiran', 80, 16), (2, 'Priya', 60, 14), (3, 'Naveen', 82, 15)")

# Query the table
query = "SELECT * FROM Students"
df = pd.read_sql(query, conn)

# Display the Output
print("DataFrame from SQL table:")
print(df)

Following is the output of the above code −

DataFrame from SQL table:
   id    Name  Marks  Age
0   1   Kiran   80.0   16
1   2   Priya   60.0   14
2   3  Naveen   82.0   15

Writing Pandas DataFrames to SQL Table

You can easily write data from a Pandas DataFrame or Series object into a SQL table using the to_sql() method. This method supports creating new tables, appending to existing ones, or replacing existing data, controlled by its if_exists parameter.

Example

The following example demonstrates writing a Pandas DataFrame to a SQL table using the to_sql() method.

import pandas as pd
from sqlalchemy import create_engine

# Sample DataFrame
data = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["Raj", "Divya", "Charan"]
})

# Create an SQLite engine
engine = create_engine("sqlite:///:memory:")

# Write DataFrame to SQL table
data.to_sql("users", con=engine, if_exists="replace", index=False)

print("DataFrame is saved to SQL table...")

# Read and save SQL Data back to DataFrame
query = "SELECT * FROM users"
df = pd.read_sql(query, engine)

# Display the Output
print("DataFrame from SQL table:")
print(df)

Following is the output of the above code −

DataFrame is saved to SQL table...

DataFrame from SQL table:
   id    name
0   1     Raj
1   2   Divya
2   3  Charan
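The example above uses if_exists="replace", which overwrites any existing table. The sketch below (with made-up sample rows, written through a plain sqlite3 connection) shows if_exists="append" instead, which adds new rows to a table that already exists.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")

batch1 = pd.DataFrame({"id": [1, 2], "name": ["Raj", "Divya"]})
batch2 = pd.DataFrame({"id": [3], "name": ["Charan"]})

# The first write creates the table ...
batch1.to_sql("users", con=conn, if_exists="replace", index=False)

# ... and if_exists="append" adds rows instead of overwriting
batch2.to_sql("users", con=conn, if_exists="append", index=False)

df = pd.read_sql("SELECT * FROM users", conn)
print(len(df))  # 3 rows: both batches are kept
```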

Handling SQL Data Types in Pandas

Pandas automatically maps most SQL data types to appropriate DataFrame data types. However, certain complex types may need manual handling.

Example

This example demonstrates how Pandas automatically handles SQL data types while reading data from a SQL database.

import pandas as pd 
import sqlite3

# Create a connection
conn = sqlite3.connect(":memory:")

# Create a table with different SQL data types
conn.execute("CREATE TABLE TypesTest (id INTEGER, flag BOOLEAN, score REAL, name TEXT)") 
conn.execute("INSERT INTO TypesTest VALUES (1, 1, 89.5, 'Aadyaa')")

# Query and read the table into Pandas
df = pd.read_sql("SELECT * FROM TypesTest", conn)

# Display DataFrame and dtypes
print('DataFrame from SQL Database:')
print(df) 
print('\nData types of each field in output DataFrame:')
print(df.dtypes) 

The output of the above code is as follows −

DataFrame from SQL Database:
   id  flag  score    name
0   1     1   89.5  Aadyaa

Data types of each field in output DataFrame:
id         int64
flag       int64
score    float64
name      object
dtype: object
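SQLite is an example of a type that needs manual handling: it stores BOOLEAN values as integers, so a flag column like the one above arrives as int64 rather than bool. A minimal sketch of restoring the intended dtype with an explicit cast (using a small made-up table):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TypesTest (id INTEGER, flag BOOLEAN)")
conn.execute("INSERT INTO TypesTest VALUES (1, 1), (2, 0)")

df = pd.read_sql("SELECT * FROM TypesTest", conn)

# SQLite stores BOOLEAN as integers, so the column arrives as int64;
# an explicit cast restores the intended boolean dtype
df["flag"] = df["flag"].astype(bool)

print(df["flag"].dtype)  # bool
```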

Managing Datetime Columns in SQL Tables

The to_sql() method can handle both timezone-naive and timezone-aware datetime data. However, the way it is stored depends on the SQL database system used.

Example

This example demonstrates how Pandas manages datetime data while reading from a SQL table. Below, you can observe how the parse_dates parameter converts the event_date column into datetime64.

import pandas as pd 
import sqlite3

# Create a connection
conn = sqlite3.connect(":memory:")

# Create a table with datetime values
conn.execute("CREATE TABLE Events (id INTEGER, event_date TEXT)") 
conn.execute("INSERT INTO Events VALUES (1, '2025-01-01'), (2, '2025-01-15')")

# Read data and parse datetime
df = pd.read_sql("SELECT * FROM Events", conn, parse_dates=["event_date"])

# Display DataFrame and dtypes
print('Output DataFrame from SQL Database:')
print(df) 
print('\nData types of fields in the output DataFrame:')
print(df.dtypes)

Following is the output of the above code −

Output DataFrame from SQL Database:
   id event_date
0   1 2025-01-01
1   2 2025-01-15

Data types of fields in the output DataFrame:
id                     int64
event_date    datetime64[ns]
dtype: object
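If the dates were not parsed at read time, they could still be converted afterwards. The sketch below (same made-up Events table) reads the column as plain strings and then applies pd.to_datetime(), which is equivalent to passing parse_dates to read_sql().

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Events (id INTEGER, event_date TEXT)")
conn.execute("INSERT INTO Events VALUES (1, '2025-01-01'), (2, '2025-01-15')")

# Without parse_dates, the column comes back as plain strings (object dtype)
df = pd.read_sql("SELECT * FROM Events", conn)
print(df["event_date"].dtype)  # object

# pd.to_datetime() converts it after the fact
df["event_date"] = pd.to_datetime(df["event_date"])
print(df["event_date"].dtype)  # datetime64[ns]
```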

Querying a SQL Database with Pandas

Pandas allows you to run SQL queries directly from Python, simplifying database operations for data analysis.

Example

The following example filters records from the database based on the amount field using a WHERE clause.

import pandas as pd 
import sqlite3

# Create a connection
conn = sqlite3.connect(":memory:")

# Create a sample table
conn.execute("CREATE TABLE Sales (id INTEGER, product TEXT, amount REAL)") 
conn.execute("INSERT INTO Sales VALUES (1, 'Phone', 200.50), (2, 'Laptop', 800.00), (3, 'Tablet', 300.00)")

# Query the table with specific conditions
query = "SELECT * FROM Sales WHERE amount > 250" 
df = pd.read_sql(query, conn)

# Display the results
print("Output DataFrame from SQL Database after query:")
print(df)

The output of the above code is as follows −

Output DataFrame from SQL Database after query:
   id product  amount
0   2  Laptop   800.0
1   3  Tablet   300.0
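When the filter value comes from user input or a variable, it is safer to pass it through the params argument of read_sql() than to build the SQL string by hand. A sketch using the same made-up Sales data, with SQLite's "?" placeholder style:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (id INTEGER, product TEXT, amount REAL)")
conn.execute(
    "INSERT INTO Sales VALUES (1, 'Phone', 200.50), "
    "(2, 'Laptop', 800.00), (3, 'Tablet', 300.00)"
)

# params passes the threshold separately from the SQL text, which avoids
# manual string formatting and SQL injection; '?' is SQLite's placeholder
query = "SELECT product, amount FROM Sales WHERE amount > ?"
df = pd.read_sql(query, conn, params=(250,))

print(df["product"].tolist())  # ['Laptop', 'Tablet']
```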