Stripe
Stripe is an online payment platform that allows businesses to securely process and manage customer transactions over the Internet.
This Stripe dlt verified source and pipeline example loads data using the Stripe API to the destination of your choice.
This verified source loads data from the following endpoints:
| Name | Description |
| --- | --- |
| Subscription | Recurring payment on Stripe |
| Account | User profile on Stripe |
| Coupon | Discount codes offered by businesses |
| Customer | Buyers using Stripe |
| Product | Items or services for sale |
| Price | Cost details for products or plans |
| Event | Significant activities in a Stripe account |
| Invoice | Payment request document |
| BalanceTransaction | Funds movement record in Stripe |
Please note that endpoints in the verified source can be customized as per the Stripe API reference documentation.
Setup guide
Grab credentials
- Log in to your Stripe account.
- Click ⚙️ Settings in the top-right.
- Go to Developers from the top menu.
- Choose "API Keys".
- In "Standard Keys", click "Reveal test key" beside the Secret Key.
- Note down the API_secret_key for configuring secrets.toml.
Note: The Stripe UI, which is described here, might change. The full guide is available at this link.
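Before saving the key, a quick check can catch copy-paste mistakes: Stripe secret keys begin with `sk_test_` (test mode) or `sk_live_` (live mode). A minimal sketch (the helper function is hypothetical, not part of this source):

```python
# Hypothetical sanity check for a copied Stripe secret key.
# Stripe secret keys start with "sk_test_" or "sk_live_";
# keys starting with "pk_" are publishable keys and will not work here.
def looks_like_stripe_secret_key(key: str) -> bool:
    return key.startswith(("sk_test_", "sk_live_")) and len(key) > 12

print(looks_like_stripe_secret_key("sk_test_abc123xyz"))  # True
print(looks_like_stripe_secret_key("pk_test_abc123xyz"))  # False (publishable key)
```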
Initialize the verified source
To get started with your data pipeline, follow these steps:
Enter the following command:

```shell
dlt init stripe_analytics duckdb
```

This command will initialize the pipeline example with Stripe as the source and duckdb as the destination. If you'd like to use a different destination, simply replace `duckdb` with the name of your preferred destination. After running this command, a new directory will be created with the necessary files and configuration settings to get started.
For more information, read the guide on how to add a verified source.
Add credentials
In the `.dlt` folder, there's a file called `secrets.toml`. It's where you store sensitive information securely, like access tokens. Keep this file safe. Here's its format for service account authentication:

```toml
# Put your secret values and credentials here. Do not share this file and do not push it to GitHub.
[sources.stripe_analytics]
stripe_secret_key = "stripe_secret_key" # please set me up!
```

Substitute "stripe_secret_key" with the value you copied above for secure access to your Stripe resources.

Finally, enter credentials for your chosen destination as per the docs.
For more information, read the General Usage: Credentials.
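As an alternative to `secrets.toml`, dlt can also read secrets from environment variables, where the variable name is the TOML path uppercased and joined with double underscores. A small sketch of that mapping (the helper function is illustrative, not a dlt API):

```python
# Illustrative helper showing dlt's environment-variable naming convention:
# each segment of the TOML path is uppercased and joined with "__".
def toml_path_to_env_var(path: str) -> str:
    return "__".join(segment.upper() for segment in path.split("."))

# The secret from [sources.stripe_analytics] maps to:
print(toml_path_to_env_var("sources.stripe_analytics.stripe_secret_key"))
# SOURCES__STRIPE_ANALYTICS__STRIPE_SECRET_KEY
```

Setting that environment variable before running the pipeline has the same effect as placing the key in `secrets.toml`.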
Run the pipeline
Before running the pipeline, ensure that you have installed all the necessary dependencies by running the command:
```shell
pip install -r requirements.txt
```
You're now ready to run the pipeline! To get started, run the following command:
```shell
python stripe_analytics_pipeline.py
```
Once the pipeline has finished running, you can verify that everything loaded correctly by using the following command:
```shell
dlt pipeline <pipeline_name> show
```
For example, the `pipeline_name` for the above pipeline example is `stripe_analytics`. You may also use any custom name instead.
For more information, read the guide on how to run a pipeline.
Sources and resources
dlt works on the principle of sources and resources.
Default endpoints
You can write your own pipelines to load data to a destination using this verified source. However, it is important to note how the STRIPE_ENDPOINTS and INCREMENTAL_ENDPOINTS tuples are defined in `stripe_analytics/settings.py`.
```python
# The most popular Stripe API endpoints
STRIPE_ENDPOINTS = ("Subscription", "Account", "Coupon", "Customer", "Product", "Price")

# Possible incremental endpoints
# The incremental endpoints default to Stripe API endpoints with uneditable data.
INCREMENTAL_ENDPOINTS = ("Event", "Invoice", "BalanceTransaction")
```

Stripe's default API endpoints lack the "updated" key, which triggers 'replace' mode. Use incremental endpoints for incremental loading.
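The distinction above can be sketched as a simple routing rule (an illustrative sketch, not the verified source's actual code): endpoints with immutable records can be appended to incrementally, while mutable endpoints lacking an "updated" field must be fully reloaded each run.

```python
# Illustrative sketch of choosing a write disposition per endpoint.
STRIPE_ENDPOINTS = ("Subscription", "Account", "Coupon", "Customer", "Product", "Price")
INCREMENTAL_ENDPOINTS = ("Event", "Invoice", "BalanceTransaction")

def write_disposition_for(endpoint: str) -> str:
    if endpoint in INCREMENTAL_ENDPOINTS:
        return "append"   # immutable records: only new ones need loading
    return "replace"      # mutable records without "updated": reload everything

print(write_disposition_for("Invoice"))   # append
print(write_disposition_for("Customer"))  # replace
```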
Source stripe_source
This function retrieves data from the Stripe API for the specified endpoint:
```python
@dlt.source
def stripe_source(
    endpoints: Tuple[str, ...] = STRIPE_ENDPOINTS,
    stripe_secret_key: str = dlt.secrets.value,
    start_date: Optional[DateTime] = None,
    end_date: Optional[DateTime] = None,
) -> Iterable[DltResource]:
    ...
```
- `endpoints`: Tuple containing endpoint names.
- `start_date`: Start datetime for data loading (default: None).
- `end_date`: End datetime for data loading (default: None).

This source loads all provided endpoints in 'replace' mode. For incremental endpoints, use incremental_stripe_source.
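Under the hood, Stripe's list APIs filter by creation time. A hypothetical sketch of how `start_date` and `end_date` might map onto Stripe's documented `created[gte]`/`created[lte]` query parameters (the helper function name is invented for illustration):

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical helper: translate optional datetime bounds into the
# Unix-timestamp filters that Stripe's list endpoints accept.
def created_filters(start_date: Optional[datetime], end_date: Optional[datetime]) -> dict:
    params = {}
    if start_date is not None:
        params["created[gte]"] = int(start_date.timestamp())
    if end_date is not None:
        params["created[lte]"] = int(end_date.timestamp())
    return params

params = created_filters(
    datetime(2022, 1, 1, tzinfo=timezone.utc),
    datetime(2022, 12, 31, tzinfo=timezone.utc),
)
print(params)  # {'created[gte]': 1640995200, 'created[lte]': 1672444800}
```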
Source incremental_stripe_source
This source loads data in 'append' mode from incremental endpoints.
```python
@dlt.source
def incremental_stripe_source(
    endpoints: Tuple[str, ...] = INCREMENTAL_ENDPOINTS,
    stripe_secret_key: str = dlt.secrets.value,
    initial_start_date: Optional[DateTime] = None,
    end_date: Optional[DateTime] = None,
) -> Iterable[DltResource]:
    ...
```
- `endpoints`: Tuple containing incremental endpoint names.
- `initial_start_date`: Parameter for incremental loading; on the first run, data created after initial_start_date is loaded (default: None).
- `end_date`: End datetime for data loading (default: None).
After each run, 'initial_start_date' updates to the last loaded date. Subsequent runs then retrieve only new data using append mode, streamlining the process and preventing redundant data downloads.
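The stateful behavior described above can be shown with a toy simulation (this is not dlt's actual state implementation; names and the timestamp representation are illustrative):

```python
# Toy simulation of incremental loading: a dict stands in for the pipeline
# state that dlt persists between runs.
state = {}

def next_window(endpoint: str, initial_start_date: int, now: int) -> tuple:
    # Resume from the last loaded point if one exists, else from the initial date.
    start = state.get(endpoint, initial_start_date)
    state[endpoint] = now  # remember where this run ended
    return (start, now)

# First run: load everything since initial_start_date.
print(next_window("Event", initial_start_date=1000, now=2000))  # (1000, 2000)
# Second run: only data created after the previous run's end.
print(next_window("Event", initial_start_date=1000, now=3000))  # (2000, 3000)
```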
For more information, read the Incremental loading.
Customization
Create your own pipeline
If you wish to create your own pipelines, you can leverage source and resource methods from this verified source.
Configure the pipeline by specifying the pipeline name, destination, and dataset as follows:
```python
pipeline = dlt.pipeline(
    pipeline_name="stripe_pipeline",  # Use a custom name if desired
    destination="duckdb",  # Choose the appropriate destination (e.g., duckdb, redshift, postgres)
    dataset_name="stripe_dataset",  # Use a custom name if desired
)
```

To load endpoints like "Plan" and "Charge" in 'replace' mode, retrieving all data for the year 2022:
```python
source_single = stripe_source(
    endpoints=("Plan", "Charge"),
    start_date=pendulum.DateTime(2022, 1, 1),
    end_date=pendulum.DateTime(2022, 12, 31),
)
load_info = pipeline.run(source_single)
print(load_info)
```

To load data from the "Invoice" endpoint, which has static data, using incremental loading:
```python
# On the first run, load all data created after initial_start_date and before end_date
source_incremental = incremental_stripe_source(
    endpoints=("Invoice",),
    initial_start_date=pendulum.DateTime(2022, 1, 1),
    end_date=pendulum.DateTime(2022, 12, 31),
)
load_info = pipeline.run(source_incremental)
print(load_info)
```

For subsequent runs, dlt sets the previous "end_date" as the "initial_start_date", ensuring incremental data retrieval.
To load data created after December 31, 2022, adjust the date range for `stripe_source` to prevent redundant loading. For `incremental_stripe_source`, the `initial_start_date` will auto-update to the last loaded date from the previous run.

```python
source_single = stripe_source(
    endpoints=("Plan", "Charge"),
    start_date=pendulum.DateTime(2022, 12, 31),
)
source_incremental = incremental_stripe_source(
    endpoints=("Invoice",),
)
load_info = pipeline.run(data=[source_single, source_incremental])
print(load_info)
```

To load data incrementally, maintain the pipeline name and destination dataset name. The pipeline name is vital for accessing the last run's state, which determines the end date for the incremental data load. Altering these names can trigger "dev_mode", disrupting the metadata (state) tracking for incremental data loading.
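Why the pipeline name matters can be illustrated with a toy lookup (not dlt internals; the names and state shape are invented): state is keyed by pipeline name, so renaming the pipeline makes the previous run's state invisible, as if it were a first run.

```python
# Toy illustration: persisted state keyed by pipeline name.
saved_state = {"stripe_pipeline": {"Invoice": "2022-12-31"}}

def last_loaded_date(pipeline_name: str, endpoint: str):
    # Returns None when no state exists for this pipeline name,
    # which forces a full (first-run) load.
    return saved_state.get(pipeline_name, {}).get(endpoint)

print(last_loaded_date("stripe_pipeline", "Invoice"))   # 2022-12-31
print(last_loaded_date("renamed_pipeline", "Invoice"))  # None -> loads from scratch
```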
Additional Setup guides
- Load data from Stripe to DuckDB in Python with dlt
- Load data from Stripe to AlloyDB in Python with dlt
- Load data from Stripe to YugabyteDB in Python with dlt
- Load data from Stripe to Databricks in Python with dlt
- Load data from Stripe to Snowflake in Python with dlt
- Load data from Stripe to Neon Serverless Postgres in Python with dlt
- Load data from Stripe to Timescale in Python with dlt
- Load data from Stripe to Azure Cosmos DB in Python with dlt
- Load data from Stripe to Redshift in Python with dlt
- Load data from Stripe to ClickHouse in Python with dlt