Boto3 vs Botocore: Understanding the AWS Python SDK Layers
If you're building Python applications that interact with AWS services—whether spinning up EC2 instances, managing S3 buckets, querying DynamoDB, or automating Lambda—you've likely encountered boto3 and botocore. These two packages sit at the heart of AWS's official Python SDK, but they serve different roles. Many developers install boto3 and never think twice about botocore, yet understanding their relationship can help you write cleaner, more efficient code and avoid common pitfalls.
In this post, we'll break down what each package does, how they differ, when to use one over the other, and why boto3 dominates download stats while botocore quietly powers everything behind the scenes.
AWS maintains an open-source Python SDK split into layers for modularity and reuse:
Botocore — The low-level foundation. It handles core functionality like:

- Loading credentials and signing requests
- Serializing API calls and parsing responses into Python dictionaries
- Retries, timeouts, and endpoint resolution
- Low-level clients for every AWS service
Botocore is the shared core that powers both boto3 and the AWS CLI (command-line tool). It's designed to be lightweight and precise, driven by automatically generated JSON service descriptions so it's always up-to-date with the latest AWS APIs.
Boto3 — The high-level SDK built on top of botocore. It adds Pythonic abstractions:

- Resources: object-oriented interfaces (e.g., s3.Bucket('my-bucket').objects.all()) that feel more natural in Python
- Collections for iterating over groups of resources with pagination handled for you
- High-level helpers, such as managed S3 transfers (via the s3transfer dependency)

In short: You almost always install and use boto3, but botocore is the engine running underneath.
| Aspect | Botocore | Boto3 |
|-------------------------|---------------------------------------|--------------------------------------------|
| Level | Low-level (raw API calls) | High-level (Pythonic abstractions) |
| Primary Use | Building tools like AWS CLI or custom low-level clients | Everyday AWS integration in apps/scripts |
| Interface | session.create_client('s3').list_buckets() | boto3.client('s3').list_buckets() or boto3.resource('s3').buckets.all() |
| Ease of Use | More verbose, manual response handling | Cleaner, more readable code |
| Abstractions | None (direct API ops) | Resources, collections, paginators |
| Dependencies | Minimal | Depends on botocore + extras (jmespath, s3transfer) |
| Maintenance | Actively developed by AWS | Actively developed by AWS |
| When to Use | Rare: custom SDKs, extreme control | Almost always: standard Python + AWS work |
Listing S3 Buckets (Low-Level with Botocore)

```python
import botocore.session

# botocore sessions resolve credentials and region from your environment
session = botocore.session.get_session()
client = session.create_client('s3')

response = client.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])
```
Same Task with Boto3 (Recommended)

```python
import boto3

# Client approach (still low-level but nicer)
client = boto3.client('s3')
response = client.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

# Resource approach (high-level, Pythonic)
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
```
The boto3 resource version is shorter and more intuitive, and collections like buckets.all() handle pagination for you automatically (retries are built into botocore for both approaches).
From PyPI trends (as tracked on sites like pypistats.org), botocore consistently ranks at or near the very top of all packages by downloads, typically ahead of boto3 itself.

Why the gap? Every boto3 install pulls in botocore as a dependency, and the AWS CLI depends on it directly, so botocore gets counted for all of those installs on top of any standalone use. Both are infrastructure powerhouses: cloud workloads drive enormous volume for AWS-related packages.
When Should You Use Botocore Directly?

Rarely, but valid cases include:

- Building your own SDK or tooling on top of the raw API layer (the AWS CLI does exactly this)
- Needing extreme control that boto3's abstractions would get in the way of
- Keeping the dependency footprint as minimal as possible
For 99% of Python + AWS work—scripts, web apps, data pipelines, serverless functions—stick with boto3.
Pro tip: Always configure credentials properly (via environment variables, ~/.aws/credentials, or IAM roles) and use sessions for multi-account or multi-region work.
Curious about trends for these packages? Check the dashboards on pypistats.org for recent download counts, breakdowns by version and Python release, and side-by-side comparisons. If you're watching boto3/botocore in production, keep an eye on unusual download spikes or drops: ecosystem shifts can signal broader AWS usage patterns.
Happy coding in the cloud! 🚀