API documentation#

Connecting#

dataset.connect(url=None, schema=None, engine_kwargs=None, ensure_schema=True, row_type=<class 'collections.OrderedDict'>, sqlite_wal_mode=True, on_connect_statements=None)[source]#

Opens a new connection to a database.

url can be any valid SQLAlchemy engine URL. If url is not given, the DATABASE_URL environment variable will be tried instead. Returns an instance of Database. Additionally, engine_kwargs will be passed directly to SQLAlchemy, e.g. setting engine_kwargs={'pool_recycle': 3600} will avoid DB connection timeouts. Set row_type to an alternate dict-like class to change the type of container rows are stored in:

db = dataset.connect('sqlite:///factbook.db')
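
For example, engine_kwargs and row_type can be combined in a single call; the values shown here are illustrative:

# recycle pooled connections after an hour to avoid server-side
# timeouts, and return rows as plain dicts
db = dataset.connect('sqlite:///factbook.db',
                     engine_kwargs={'pool_recycle': 3600},
                     row_type=dict)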

One of the main features of dataset is the automatic creation of tables and columns as data is inserted. This behaviour can be disabled via the ensure_schema argument. It can also be overridden on many of the data manipulation methods using the ensure flag.

If you want to run custom SQLite pragmas on database connect, you can add them to on_connect_statements as a set of strings. A full list of PRAGMAs is available in the SQLite documentation (https://www.sqlite.org/pragma.html).
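
For instance, to enable foreign key enforcement on every new SQLite connection (the pragma shown is one example of many):

db = dataset.connect('sqlite:///factbook.db',
                     on_connect_statements={'PRAGMA foreign_keys=ON'})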

Notes#

  • dataset uses SQLAlchemy connection pooling when connecting to the database. There is no way of explicitly clearing or shutting down the connections, other than having the dataset instance garbage collected.

Database#

class dataset.Database(url, schema=None, engine_kwargs=None, ensure_schema=True, row_type=<class 'collections.OrderedDict'>, sqlite_wal_mode=True, on_connect_statements=None)[source]#

A database object represents a SQL database with multiple tables.

begin()[source]#

Enter a transaction explicitly.

No data will be written until the transaction has been committed.

commit()[source]#

Commit the current transaction.

Make all statements executed since the transaction was begun permanent.
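
A typical pattern pairs begin() with commit() and, on failure, rollback() (documented below); the table and column names are only examples:

db = dataset.connect('sqlite:///factbook.db')
db.begin()
try:
    db['user'].insert(dict(name='John Doe', age=46))
    db.commit()
except Exception:
    db.rollback()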

create_table(table_name, primary_id=None, primary_type=None, primary_increment=None)[source]#

Create a new table.

Either loads a table or creates it if it doesn’t exist yet. You can define the name and type of the primary key field, if a new table is to be created. The default is to create an auto-incrementing integer, id. You can also set the primary key to be a string or big integer. The caller will be responsible for the uniqueness of primary_id if it is defined as a text type. You can disable auto-increment behaviour for numeric primary keys by setting primary_increment to False.

Returns a Table instance.

table = db.create_table('population')

# custom id and type
table2 = db.create_table('population2', 'age')
table3 = db.create_table('population3',
                         primary_id='city',
                         primary_type=db.types.text)
# custom length of String
table4 = db.create_table('population4',
                         primary_id='city',
                         primary_type=db.types.string(25))
# no primary key
table5 = db.create_table('population5',
                         primary_id=False)
get_table(table_name, primary_id=None, primary_type=None, primary_increment=None)[source]#

Load or create a table.

This is now the same as create_table.

table = db.get_table('population')
# you can also use the short-hand syntax:
table = db['population']
load_table(table_name)[source]#

Load a table.

This will fail if the table does not already exist in the database. If the table exists, its columns will be reflected and made available on the Table object.

Returns a Table instance.

table = db.load_table('population')
query(query, *args, **kwargs)[source]#

Run a statement on the database directly.

Allows for the execution of arbitrary read/write queries. A query can either be a plain text string, or a SQLAlchemy expression. If a plain string is passed in, it will be converted to an expression automatically.

Further positional and keyword arguments will be used for parameter binding. To include a positional argument in your query, use question marks in the query (i.e. SELECT * FROM tbl WHERE a = ?). For keyword arguments, use a bind parameter (i.e. SELECT * FROM tbl WHERE a = :foo).

statement = 'SELECT user, COUNT(*) c FROM photos GROUP BY user'
for row in db.query(statement):
    print(row['user'], row['c'])

The returned iterator will yield each result sequentially.
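
A sketch of keyword parameter binding; the table and column names are hypothetical:

statement = 'SELECT * FROM photos WHERE user = :user'
for row in db.query(statement, user='dolores'):
    print(row)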

rollback()[source]#

Roll back the current transaction.

Discard all statements executed since the transaction was begun.

property tables#

Get a listing of all tables that exist in the database.
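
For example:

print(db.tables)
# e.g. ['photos', 'population']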

Table#

class dataset.Table(database, table_name, primary_id=None, primary_type=None, primary_increment=None, auto_create=False)[source]#

Represents a table in a database and exposes common operations.

__iter__()[source]#

Return all rows of the table as simple dictionaries.

Allows for iterating over all rows in the table without explicitly calling find().

for row in table:
    print(row)
__len__()[source]#

Return the number of rows in the table.
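
For example:

# the row count, equivalent to an unfiltered count()
print(len(table))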

all(*_clauses, **kwargs)#

Perform a simple search on the table.

Simply pass keyword arguments as filters.

results = table.find(country='France')
results = table.find(country='France', year=1980)

Using _limit:

# just return the first 10 rows
results = table.find(country='France', _limit=10)

You can sort the results by single or multiple columns. Append a minus sign to the column name for descending order:

# sort results by a column 'year'
results = table.find(country='France', order_by='year')
# return all rows sorted by multiple columns (descending by year)
results = table.find(order_by=['country', '-year'])

You can also submit filters based on criteria other than equality, see Advanced filters for details.

To run more complex queries with JOINs, or to perform GROUP BY-style aggregation, you can also use db.query() to run raw SQL queries instead.

property columns#

Get a listing of all columns that exist in the table.

count(*_clauses, **kwargs)[source]#

Return the count of results for the given filter set.
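
For example, count() accepts the same equality filters as find(); the values are illustrative:

num = table.count(country='France', year=1980)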

create_column(name, type, **kwargs)[source]#

Create a new column name of a specified type.

table.create_column('created_at', db.types.datetime)

type corresponds to an SQLAlchemy type as described by dataset.db.Types. Additional keyword arguments are passed to the constructor of Column, so that default values and options like nullable and unique can be set.

table.create_column('key', db.types.text, unique=True, nullable=False)
table.create_column('food', db.types.text, default='banana')
create_column_by_example(name, value)[source]#

Explicitly create a new column name with a type that is appropriate to store the given example value. The type is guessed in the same way as for the insert method with ensure=True.

table.create_column_by_example('length', 4.2)

If a column of the same name already exists, no action is taken, even if it is not of the type we would have created.

create_index(columns, name=None, **kw)[source]#

Create an index to speed up queries on a table.

If no name is given, a random name is created.

table.create_index(['name', 'country'])
delete(*clauses, **filters)[source]#

Delete rows from the table.

Keyword arguments can be used to add column-based filters. The filter criterion will always be equality:

table.delete(place='Berlin')

If no arguments are given, all records are deleted.

distinct(*args, **_filter)[source]#

Return all the unique (distinct) values for the given columns.

# returns only one row per year, ignoring the rest
table.distinct('year')
# works with multiple columns, too
table.distinct('year', 'country')
# you can also combine this with a filter
table.distinct('year', country='China')
drop()[source]#

Drop the table from the database.

Deletes both the schema and all the contents within it.

drop_column(name)[source]#

Drop the column name.

table.drop_column('created_at')
find(*_clauses, **kwargs)[source]#

Perform a simple search on the table.

Simply pass keyword arguments as filters.

results = table.find(country='France')
results = table.find(country='France', year=1980)

Using _limit:

# just return the first 10 rows
results = table.find(country='France', _limit=10)

You can sort the results by single or multiple columns. Append a minus sign to the column name for descending order:

# sort results by a column 'year'
results = table.find(country='France', order_by='year')
# return all rows sorted by multiple columns (descending by year)
results = table.find(order_by=['country', '-year'])

You can also submit filters based on criteria other than equality, see Advanced filters for details.

To run more complex queries with JOINs, or to perform GROUP BY-style aggregation, you can also use db.query() to run raw SQL queries instead.
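
For instance, non-equality criteria take the form of single-key dictionaries, as described under Advanced filters; the values here are illustrative:

results = table.find(year={'>=': 1985})
results = table.find(year={'between': [1985, 1991]})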

find_one(*args, **kwargs)[source]#

Get a single result from the table.

Works just like find() but returns one result, or None.

row = table.find_one(country='United States')
has_column(column)[source]#

Check if a column with the given name exists on this table.

has_index(columns)[source]#

Check if an index exists to cover the given columns.
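
Both checks pair naturally with the corresponding create call; the column and index names are illustrative:

if not table.has_column('year'):
    table.create_column('year', db.types.integer)
if not table.has_index(['name', 'country']):
    table.create_index(['name', 'country'])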

insert(row, ensure=None, types=None)[source]#

Add a row dict by inserting it into the table.

If ensure is set and any of the keys of the row are not table columns, they will be created automatically.

During column creation, types will be checked for a key matching the name of a column to be created, and the given SQLAlchemy column type will be used. Otherwise, the type is guessed from the row value, defaulting to a simple unicode field.

data = dict(title='I am a banana!')
table.insert(data)

Returns the inserted row’s primary key.
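
A sketch of passing an explicit column type via types; the column name is hypothetical:

data = dict(title='I am a banana!', weight=5)
# store 'weight' as a float column even though the example value is an int
table.insert(data, types={'weight': db.types.float})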

insert_ignore(row, keys, ensure=None, types=None)[source]#

Add a row dict into the table if the row does not exist.

If rows with matching keys exist no change is made.

Setting ensure results in automatically creating missing columns, i.e. columns for any keys of the row that are not yet table columns.

During column creation, types will be checked for a key matching the name of a column to be created, and the given SQLAlchemy column type will be used. Otherwise, the type is guessed from the row value, defaulting to a simple unicode field.

data = dict(id=10, title='I am a banana!')
table.insert_ignore(data, ['id'])
insert_many(rows, chunk_size=1000, ensure=None, types=None)[source]#

Add many rows at a time.

This is significantly faster than adding them one by one. By default, the rows are processed in chunks of 1000 per commit, unless you specify a different chunk_size.

See insert() for details on the other parameters.

rows = [dict(name='Dolly')] * 10000
table.insert_many(rows)
update(row, keys, ensure=None, types=None, return_count=False)[source]#

Update a row in the table.

The update is managed via the set of column names stated in keys: they will be used as filters for the data to be updated, using the values in row.

# update all entries with id matching 10, setting their title
# columns
data = dict(id=10, title='I am a banana!')
table.update(data, ['id'])

If keys in row update columns not present in the table, they will be created based on the settings of ensure and types, matching the behavior of insert().

update_many(rows, keys, chunk_size=1000, ensure=None, types=None)[source]#

Update many rows in the table at a time.

This is significantly faster than updating them one by one. By default, the rows are processed in chunks of 1000 per commit, unless you specify a different chunk_size.

See update() for details on the other parameters.
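
For example, updating several rows matched on their id; the values are illustrative:

rows = [dict(id=10, title='banana'), dict(id=11, title='kumquat')]
table.update_many(rows, ['id'])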

upsert(row, keys, ensure=None, types=None)[source]#

An UPSERT is a smart combination of insert and update.

If rows with matching keys exist they will be updated, otherwise a new row is inserted in the table.

data = dict(id=10, title='I am a banana!')
table.upsert(data, ['id'])
upsert_many(rows, keys, chunk_size=1000, ensure=None, types=None)[source]#

Sorts multiple input rows into inserts and updates: rows that do not match an existing row on keys are inserted, and the rest are updated.

See upsert() and insert_many().
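
For example (the values are illustrative):

rows = [dict(id=10, title='I am a banana!'),
        dict(id=11, title='I am a kumquat!')]
table.upsert_many(rows, ['id'])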

Data Export#

Note: Data exporting has been extracted into a stand-alone package, datafreeze. See the relevant repository at https://github.com/pudo/datafreeze.