
Query builder

Preface

The query builder is a class used to build up a query for execution later. For example, if you need multiple wheres for a query, you can chain them together on this QueryBuilder class. The class is modified until you want to execute the query. Models use the query builder under the hood to make all of those calls; many model methods actually return an instance of QueryBuilder, so you can continue to chain complex queries together.

Using the query builder class directly allows you to make database calls without needing to use a model.

Getting the QueryBuilder class

To get the query builder class you can simply import the query builder. Once imported you will need to pass the connection_details dictionary you store in your config.database file:

from masoniteorm.query import QueryBuilder

builder = QueryBuilder().table("users")

You can also switch or specify connection on the fly using the on method:

from masoniteorm.query import QueryBuilder

builder = QueryBuilder().on('staging').table("users")

from_("users") is also a valid alias for the table("users") method. Feel free to use whatever you feel is more expressive.

You can then start making any number of database calls.

Models

By default, the query builder will return dictionaries or lists depending on the result set. Here is an example of a result using only the query builder:

# Without models
user = QueryBuilder().table("users").first()
# == {"id": 1, "name": "Joe" ...}

# With models
from masoniteorm.models import Model

class User(Model):
    pass

user = QueryBuilder(model=User).table("users").first()
# == <app.models.User>

Fetching Records

Select

builder.table('users').select('username').get()
# SELECT `users`.`username` FROM `users`

You can also select a table and column:

builder.table('users').select('profiles.name').get()
# SELECT `profiles`.`name` FROM `users`

You can also select a table and an asterisk (*). This is useful when doing joins:

builder.table('users').select('profiles.*').get()
# SELECT `profiles`.* FROM `users`

Lastly you can also provide the column with an alias by adding as to the column select:

builder.table('users').select('profiles.username as name').get()
# SELECT `profiles`.`username` AS name FROM `users`

First

You can easily get the first record:

builder.table('users').first()
# SELECT * from `users` LIMIT 1

All Records

You can also simply fetch all records from a table:

builder.table('users').all()
# SELECT * from `users`

The Get Method

Once you start chaining methods you should call the get() method instead of the all() method to execute the query.

For example, this is correct:

builder.table('users').select('username').get()

And this is wrong:

builder.table('users').select('username').all()

Wheres

You may also specify any one of these where statements:

The simplest one is a "where equals" statement. This is a query to get where username equals Joe AND age equals 18:

builder.table('users').where('username', 'Joe').where('age', 18).get()

You can also use a dictionary to build the where method:

builder.table('users').where({"username": "Joe", "age": 18}).get()

You can also specify comparison operators:

builder.table('users').where('age', '=', 18).get()
builder.table('users').where('age', '>', 18).get()
builder.table('users').where('age', '<', 18).get()
builder.table('users').where('age', '>=', 18).get()
builder.table('users').where('age', '<=', 18).get()

Where Null

Another common where clause is checking where a value is NULL:

builder.table('users').where_null('admin').get()

This will fetch all records where the admin column is NULL.

Or the inverse:

builder.table('users').where_not_null('admin').get()

This fetches all records where admin is NOT NULL.

Where In

In order to fetch all records where a column's value is within a certain list, we can pass in a list:

builder.table('users').where_in('age', [18,21,25]).get()

This will fetch all records where the age is either 18, 21 or 25.

Where Like

You can do a WHERE LIKE or WHERE NOT LIKE query:

builder.table('users').where_like('name', "Jo%").get()
builder.table('users').where_not_like('name', "Jo%").get()

Subqueries

You can make subqueries easily by passing a callable into the where method:

builder.table("users").where(lambda q: q.where("active", 1).where_null("activated_at")).get()
# SELECT * FROM "users" WHERE ("users"."active" = '1' AND "users"."activated_at" IS NULL)

You can also use a subquery for a where_in statement:

builder.table("users").where_in("id", lambda q: q.select("profile_id").table("profiles")).get()
# SELECT * FROM "users" WHERE "id" IN (SELECT "profiles"."profile_id" FROM "profiles")

Select Subqueries

You can make a subquery in the select clause. This takes 2 parameters. The first is the alias for the subquery and the second is a callable that takes a query builder.

builder.table("stores").add_select("sales", lambda query: (
    query.count("*").from_("sales").where_column("sales.store_id", "stores.id")
)).order_by("sales", "desc")

This will add a subquery in the select part of the query. You can then order by or perform wheres on this alias.

Here is an example of all stores that make more than 1000 in sales:

builder.table("stores").add_select("sales", lambda query: (
    query.count("*").from_("sales").where_column("sales.store_id", "stores.id")
)).where("sales", ">", "1000")

Conditional Queries

Sometimes you need to specify conditional statements and run queries based on the conditional values.

For example you may have code that looks like this:

def show(self, request: Request):
    age = request.input('age')
    article = Article.where('active', 1)
    if age >= 21:
        article.where('age_restricted', 1)

Instead of writing the code above you can use the when method. This method accepts a conditional as the first parameter and a callable as the second parameter. The code above would look like this:

def show(self, request: Request):
    age = request.input('age')
    article = Article.where('active', 1).when(age >= 21, lambda q: q.where('age_restricted', 1))

If the conditional passed in the first parameter is not truthy then the second parameter will be ignored.

Limits / Offsets

It's also very simple to limit and/or offset a query.

Here is an example of a limit:

builder.table('users').limit(10).get()

Here is an example of an offset:

builder.table('users').offset(10).get()

Or here is an example of using both:

builder.table('users').limit(10).offset(10).get()

Between

You may need to get all records where column values are between 2 values:

builder.table('users').where_between('age', 18, 21).get()

Group By

You may want to group by a specific column:

builder.table('users').group_by('active').get()

You can also group by multiple columns:

builder.table('users').group_by('active, name, is_admin').get()

Group By Raw

You can also group by raw:

builder.table('users').group_by_raw('COUNT(*)').get()

Having

Having clauses are typically used during a group by. For example, returning all users grouped by salary where the salary is greater than 0:

builder.table('users').sum('salary').group_by('salary').having('salary').get()

You may also specify the same query but where the sum of the salary is greater than 50,000

builder.table('users').sum('salary').group_by('salary').having('salary', 50000).get()

Joining

Creating join queries is very simple.

builder.join('other_table', 'column1', '=', 'column2')

This will build a JoinClause behind the scenes for you.

Advanced Joins

Advanced joins are for use cases where you need to compile a join clause that is more than just joining on 2 columns, such as when you need additional on or where statements. There are currently 2 ways to build an advanced join clause.

The first way is that you may create your own JoinClause from scratch and build up your own clause:

from masoniteorm.expressions import JoinClause

clause = (
    JoinClause('other_table as ot')
    .on('column1', '=', 'column2')
    .on('column3', '=', 'column4')
    .where('column3', '>', 4)
)

builder.join(clause)

The second way is passing a "lambda" to the join method directly, which will give you a JoinClause instance you can build up. This way is a bit cleaner:

builder.join('other_table as ot', lambda join: (
    (
        join.on('column1', '=', 'column2')
        .on('column3', '=', 'column4')
        .where('column3', '>', 4)
    )
))

Left Join

builder.table('users').left_join('table1', 'table2.id', '=', 'table1.table_id')

Right Join

builder.table('users').right_join('table1', 'table2.id', '=', 'table1.table_id')

Increment

There are times where you really just need to increment a column and don't need to pull any additional information. A lot of the incrementing logic is hidden away:

builder.table('users').increment('status')

Decrementing is similar:

Decrement

builder.table('users').decrement('status')

You can also pass a second parameter for the number to increment or decrement the column by.

builder.table('users').increment('status', 10)
builder.table('users').decrement('status', 10)

Pagination

Sometimes you'll want to paginate through a result set. There are 2 ways to paginate records.

The first is a "length aware" pagination. This means the pagination result will include additional information, like the total number of records. It performs 2 queries: the initial query to get the records and a COUNT query to get the total. For large or complex result sets this may not be the best choice, since 2 queries need to be made.

builder.table("users").where("active", 1).paginate(number_of_results, page)

You may also do "simple pagination". This will not give you back a query total and will not make the second COUNT query.

builder.table("users").where("active", 1).simple_paginate(number_of_results, page)

Aggregates

There are several aggregating methods you can use to aggregate columns:

Sum

salary = builder.table('users').sum('salary').first().salary

Notice the alias for the aggregate is the name of the column.

Average

salary = builder.table('users').avg('salary').first().salary

Notice the alias for the aggregate is the name of the column.

Count

salary = builder.table('users').count('salary').first().salary

You can also count all:

builder.table('users').count('*')

Max

salary = builder.table('users').max('salary').first().salary

Min

salary = builder.table('users').min('salary').first().salary

Aliases

You may also specify an alias for your aggregate expressions. You can do this by adding "as {alias}" to your aggregate expression:

builder.table('users').sum('salary as payments').get()
#== SELECT SUM(`users`.`salary`) as payments FROM `users`

Order By

You can easily order by:

builder.order_by("column")

The default is ascending order but you can change directions:

builder.order_by("column", "desc")

You can also pass a comma-separated list to order by multiple columns:

builder.order_by("name, email, active")

You may also specify the sort direction on each one individually:

builder.order_by("name, email desc, active")

This will sort name and active in ascending order because it is the default but will sort email in descending order.

These 2 pieces of code are the same:

builder.order_by("name").order_by("email", "desc").order_by("active")
builder.order_by("name, email desc, active")

Order By Raw

You can also order by raw. This passes your raw expression directly into the ORDER BY clause:

builder.order_by_raw("name asc")

Creating Records

You can create records by passing a dictionary to the create method. This will perform an INSERT query:

builder.create({"name": "Joe", "active": 1})

Bulk Creating

You can also bulk create records by passing a list of dictionaries:

builder.bulk_create([
    {"name": "Joe", "active": 1},
    {"name": "John", "active": 0},
    {"name": "Bill", "active": 1},
])

Raw Queries

If some queries would be easier written raw you can easily do so for both selects and wheres:

builder.table('users').select_raw("COUNT(`username`) as username").where_raw("`username` = 'Joe'").get()

You can also specify a fully raw query using the statement method. This will simply execute a query directly and return the result rather than building up a query:

builder.statement("select count(*) from users where active = 1")

You can also pass query bindings as well:

builder.statement("select count(*) from users where active = '?'", [1])

You can also use the Raw expression class to specify a raw expression. This can be used with the update query:

from masoniteorm.expressions import Raw

builder.update({
    "name": Raw('"alias"')
})
# == UPDATE "users" SET "name" = "alias"

Chunking

If you need to loop over a lot of results then consider chunking. Chunking pulls only the specified number of records at a time into a generator:

for users in builder.table('users').chunk(100):
    for user in users:
        user #== <User object>
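Conceptually, chunking works like a generator that yields the results in fixed-size pieces. Here is a minimal, library-agnostic sketch of the idea, using plain Python lists as stand-ins for database rows (Masonite's real chunk() pages through the table itself):

```python
# Minimal sketch of the chunking idea: yield results in fixed-size pieces
# so only one chunk is held at a time. Plain lists stand in for rows.
def chunk(rows, size):
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

for users in chunk(list(range(250)), 100):
    # each `users` is a list of at most 100 rows
    pass
```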

Getting SQL

If you want to see the SQL that will run when the query is executed, you can use to_sql(). This method returns the full query without bindings. The actual query sent to the database is a "qmark query" (see below). The to_sql() method is mainly for debugging purposes and should not be sent directly to a database, as the result will have no query bindings and will be subject to SQL injection attacks. Use this method for debugging purposes only.

builder.table('users').count('salary').where('age', 18).to_sql()
#== SELECT COUNT(`users`.`salary`) AS salary FROM `users` WHERE `users`.`age` = '18'

Getting Qmark

Qmark is essentially just a normal SQL statement except that the values are replaced with quoted question marks ('?'). The values that would have been in the position of the question marks are stored in a tuple and sent along with the qmark query to help prevent SQL injection. The qmark query is the actual query sent using the connection class.

builder.table('users').count('salary').where('age', 18).to_qmark()
#== SELECT COUNT(`users`.`salary`) AS salary FROM `users` WHERE `users`.`age` = '?'

Note: qmark queries will reset the query builder and remove things like aggregates and wheres from the builder class. Because of this, writing get() after to_qmark will result in incorrect queries (because things like wheres and aggregates will be missing from the final query). If you need to debug a query, please use the to_sql() method which does not have this kind of resetting behavior.

Updates

Updating Records

You can update many records.

builder.where('active', 0).update({
    'active': 1
})
# UPDATE `users` SET `users`.`active` = 1 where `users`.`active` = 0

Deletes

Deleting Records

You can delete many records as well. For example, deleting all records where active is set to 0.

builder.where('active', 0).delete()

Truncating

You can also truncate directly from the query builder:

builder.truncate('users')

You may also temporarily disable and re-enable foreign keys to avoid foreign key checks.

builder.truncate('users', foreign_keys=True)

Available Methods

Aggregates

.avg('column') - Gets the average of a column. Can also use an as modifier to alias the result: .avg('column as alias').

.sum('column') - Gets the sum of a column. Can also use an as modifier to alias the result: .sum('column as alias').

.count('column') - Gets the count of a column. Can also use an as modifier to alias the result: .count('column as alias').

.max('column') - Gets the max value of a column. Can also use an as modifier to alias the result: .max('column as alias').

.min('column') - Gets the min value of a column. Can also use an as modifier to alias the result: .min('column as alias').

Joins

.join('table1', 'table2.id', '=', 'table1.table_id') - Joins 2 tables together. This performs an INNER join by default. You can control which join is performed using the clause parameter: inner, left or right.

.left_join('table1', 'table2.id', '=', 'table1.table_id') - Joins 2 tables together. This performs a LEFT join.

.right_join('table1', 'table2.id', '=', 'table1.table_id') - Joins 2 tables together. This performs a RIGHT join.

Where Clauses

.between('column', 'value') - Performs a BETWEEN clause.

.not_between('column', 'value') - Performs a NOT BETWEEN clause.

.where('column', 'value') - Performs a WHERE clause. Can optionally pass a comparison operator: .where('column', '=', 'value'). Operators available include: <, >, >=, <=, !=, =, like, not like.

.or_where('column', 'value') - Performs an OR WHERE clause. Can optionally pass a comparison operator: .or_where('column', '=', 'value'). Operators available include: <, >, >=, <=, !=, =, like, not like.

.where_like('column', 'value') - Performs a WHERE LIKE clause.

.where_not_like('column', 'value') - Performs a WHERE NOT LIKE clause.

.where_exists(lambda q: q.where(..)) - Performs an EXISTS clause. Takes a lambda expression to indicate which subquery should be generated.

.where_not_exists(lambda q: q.where(..)) - Performs a NOT EXISTS clause. Takes a lambda expression to indicate which subquery should be generated.

.where_column('column1', 'column2') - Performs a comparison between 2 columns. Operators available include: <, >, >=, <=, !=, =.

.where_in('column1', [1,2,3]) - Performs a WHERE IN clause. The second parameter needs to be a list or collection of values.

.where_not_in('column1', [1,2,3]) - Performs a WHERE NOT IN clause. The second parameter needs to be a list or collection of values.

.where_null('column1') - Performs a WHERE NULL clause.

.where_not_null('column1') - Performs a WHERE NOT NULL clause.

Pessimistic Locking

The query builder includes a few functions to help you do “pessimistic locking” on your SELECT statements.

To run the SELECT statement with a “shared lock”, you may use the shared_lock method on a query:

builder.where('votes', '>', 100).shared_lock().get()

To “lock for update” on a SELECT statement, you may use the lock_for_update method on a query:

builder.where('votes', '>', 100).lock_for_update().get()

Raw Queries

.select_raw('SUM("column")') - Specifies a raw string where the select expression would go.

.where_raw("`username` = 'Joe'") - Specifies a raw string where the WHERE expression would go.

.order_by_raw('column1, column2') - Specifies a raw string where the ORDER BY expression would go.

.group_by_raw('column1, column2') - Specifies a raw string where the GROUP BY expression would go.

Modifiers

.limit(10) - Limits the results to 10 rows.

.offset(10) - Offsets the results by 10 rows.

.take(10) - Alias for the limit method.

.skip(10) - Alias for the offset method.

.group_by('column') - Adds a GROUP BY clause.

.having('column') - Adds a HAVING clause.

.increment('column') - Increments the column by 1. Can pass a second parameter for the number to increment by: .increment('column', 100).

.decrement('column') - Decrements the column by 1. Can pass a second parameter for the number to decrement by: .decrement('column', 100).

DML

.add_select("alias", lambda q: q.where(..)) - Performs a SELECT subquery expression.

.all() - Gets all records.

.chunk(100) - Chunks a result set. Uses a generator to keep each chunk small. Useful for chunking large data sets where pulling too many results into memory would overload the application.

.create({}) - Creates a new record. Must take a dictionary of values.

.delete() - Performs a DELETE query based on the clauses already chained onto the query builder.

.first() - Gets the first record.

.from_('users') - Sets the table.

.get() - Gets all records. Used in combination with other builder methods to finally execute the query.

.last() - Gets the last record.

.paginate(limit, page) - Paginates a result set. Pass in different pages to get different results. This is a length-aware pagination: it performs a COUNT query in addition to the original query, so it could be slower on larger data sets.

.select('column') - Selects the given column. Can use the as keyword to alias the column: .select('column as alias').

.simple_paginate(limit, page) - Paginates a result set. Pass in different pages to get different results. This is not a length-aware pagination: the result will not contain the total result count.

.statement("select * from users") - Performs a raw query.

.table('users') - Alias for the from_ method.

.truncate('table') - Truncates a table. Can pass a second parameter to disable and re-enable foreign key constraints: truncate('table', foreign_keys=True).

.update({}) - Updates records with the given dictionary of values.

Testing

.to_sql() - Returns a string of the fully compiled SQL that will be generated.

.to_qmark() - Returns a string of the SQL to be generated but with ? placeholders where the SQL bindings are placed. Also resets the query builder instance.

Low Level Methods

These are lower level methods that may be useful:

.new() - Creates a new, clean builder instance. This instance does not have any clauses, selects, limits, etc. from the original builder instance. Great for performing subqueries.

.where_from_builder() - Creates a WHERE clause from a builder instance.

.get_table_name() - Gets the table's name.

White Page

Outline ORM White Paper

The Flow

I will discuss the flow at a high level first and then can talk about each part separately.

There are a few different paths you can start out with. Not everything starts at the model level. You may use the query builder class directly to build your queries. The query builder class is exactly that: a class to build queries. You will interact with this class (more on the class later), it will set things like wheres, limits and selects on itself, and then it will pass all of that off to build a query.

The Model

First let's talk about the flow of the Model. The Model is probably what most people will be using the majority of the time. The Model is basically a wrapper entity around a table. So 1 table will likely equal 1 model. A users table will have a User model and an articles table will have an Article model.

The interesting thing about the Model is that it's just a shell around the QueryBuilder class. The majority of the time you call something on the Model, it's actually just building a query builder class immediately and passing the rest of the call off. This is important to understand.

For example, calling a method like where on the model immediately returns a query builder. Since it returns a query builder, we can simply build up this class and chain on a whole bunch of methods. Finally, when we are done building a query, we call .get(), which is basically an executionary command.

When you call get(), the query builder will take everything you built up (for example 1 select and 2 where statements) and pass those into a Grammar class. The Grammar class is responsible for looping through the 3 statements and compiling them into a SQL query that will run. If it needs to build a qmark query (a query with question marks which will be replaced with query bindings to prevent SQL injection), it will compile that instead, along with a tuple of the 2 query bindings.

Once we get the query we can then pass the query into the connection class which will connect to the MySQL database to send the query.

We will then get back a dictionary from the query and "hydrate" the original model. When you hydrate a model it simply means we set the dictionary result into the model, so when we access something like user.name we will get the name of the user. Think of it as loading the dictionary into the class to be used later during attribute access and setting.
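As an illustration of the flow above, here is a toy builder and compile step. The class and method names are simplified stand-ins for explanation only, not Masonite's actual internals:

```python
# Toy illustration of the flow: the builder collects clauses, then a
# "grammar" step compiles them into a qmark query plus a bindings tuple.
class ToyBuilder:
    def __init__(self, table):
        self.table = table
        self.selects = []
        self.wheres = []

    def select(self, column):
        self.selects.append(column)
        return self

    def where(self, column, value):
        self.wheres.append((column, value))
        return self

    def to_qmark(self):
        columns = ", ".join(self.selects) or "*"
        sql = f"SELECT {columns} FROM {self.table}"
        if self.wheres:
            clauses = " AND ".join(f"{col} = '?'" for col, _ in self.wheres)
            sql += f" WHERE {clauses}"
        return sql, tuple(value for _, value in self.wheres)

sql, bindings = (
    ToyBuilder("users")
    .select("username")
    .where("active", 1)
    .where("age", 18)
    .to_qmark()
)
# sql      -> "SELECT username FROM users WHERE active = '?' AND age = '?'"
# bindings -> (1, 18)
```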

Grammar Classes

Grammar classes are classes which are responsible for the compiling of attributes into a SQL statement. Grammar classes are used for DML statements (select, insert, update and delete). Grammars are not used for DDL statements (create and alter). The SQL statement will then be given back to whatever called it (like the QueryBuilder class) and then passed to the connection class to make the database call and return the result. Again the grammar class is only responsible for compiling the query into a string. Simply taking attributes passed to it and looping through them and compiling them into a query.

The grammar class is responsible for both SQL and qmark. Again, the SQL form contains the literal values, while the qmark form replaces each value with a quoted question mark and collects the values separately.

Qmark queries will then be passed off to the connection class with a tuple of bindings like (18,). This helps protect against SQL injection attacks. All queries passed to the connection class should be the qmark query. Compiling SQL is really for debugging purposes while developing. Passing straight SQL into the connection class could leave queries open to SQL injection.

Any values should be able to be qmarked. This is done inside the grammar class by replacing the value with a '?' and then adding the value to the bindings. The grammar class knows it should be qmarked by passing the qmark boolean variable throughout the grammar class.

The grammar class is also really an abstraction. All the heavy lifting is done inside the BaseGrammar class. Child classes (like MySQLGrammar and PostgresGrammar, etc.) really just contain the formatting of the SQL strings.

Currently there are 2 different grammar classes for each supported database: one for normal queries and one for schema queries. They could be 1 big class, but that class would be giant and hard to maintain as a god class responsible for everything. A single class would also make it harder to first build the grammar up for querying (selects, updates, deletes, etc.) and then later support schema building.

Almost all SQL is basically the same but with slightly different formats or placements for some syntax. This is why the structure we use is so powerful and easy to expand or fix later on.

For example, MySQL puts its limiting statement at the end of the query (SELECT * FROM `users` LIMIT 10), while Microsoft SQL Server puts it right after the SELECT keyword (SELECT TOP 10 * FROM [users]). Notice the SQL is basically the same, but the limiting statement is in a different spot in the SQL.

We can accomplish this by specifying the general select, insert, update and delete formats so we can better organize and swap the placement later. We do this by using Python keyword string interpolation. Let's break down a more low-level way we could accomplish this.

The MySQL grammar class holds a select statement structure as a format string (simplified here for the sake of explanation; it also contains the formatting for joins, group bys, etc. in the form of {joins}, {group_by} and so on). The MySQL format string places the {limit} placeholder at the end of the statement, while the Microsoft SQL format string places it right after SELECT. Simply changing the order in this string allows us to change the format of the generated SQL statement. The last step is to change exactly what the limit word is.

Again, MySQL is LIMIT X and Microsoft is TOP X. We can accomplish this by specifying the differences in their own methods, one per grammar subclass. Remember these are all in the subclasses of the grammar class: MySQL is in MySQLGrammar and Microsoft is in MSSQLGrammar.

Now we have abstracted the differences into their own classes and class methods, and when we compile the string, everything falls into place. The compiling code itself is located in the BaseGrammar class, which calls into the grammar subclass we built above. If we remove the abstractions and explode the variables a bit, we can see at a lower level what it would be doing for each database.

So notice the abstractions can be changed per grammar for databases with different SQL structures. You just need to change the strings the methods return and the structure of the select_format string.
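The idea can be sketched in plain Python like this. The attribute and method names here are illustrative, not copied from Masonite's source:

```python
# Illustrative sketch: each grammar owns a keyword-format string plus the
# database-specific limit word. Reordering the placeholders in the format
# string moves where the limit lands in the final SQL.
class MySQLGrammar:
    select_format = "SELECT {columns} FROM {table} {limit}"

    def limit_string(self, limit):
        return f"LIMIT {limit}"

class MSSQLGrammar:
    select_format = "SELECT {limit} {columns} FROM {table}"

    def limit_string(self, limit):
        return f"TOP {limit}"

def compile_select(grammar, columns, table, limit):
    # the "BaseGrammar" step: fill the subclass's format string
    return grammar.select_format.format(
        columns=columns,
        table=table,
        limit=grammar.limit_string(limit),
    )

compile_select(MySQLGrammar(), "*", "users", 10)  # SELECT * FROM users LIMIT 10
compile_select(MSSQLGrammar(), "*", "users", 10)  # SELECT TOP 10 * FROM users
```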

Format Strings

The child grammar classes have a whole bunch of these format strings for the smaller things, like wrapping a table name. Most methods in the child grammar classes are actually just these strings.

MySQL wraps tables in backticks (`users`), Postgres and SQLite wrap tables in double quotes ("users"), and Microsoft wraps tables in square brackets ([users]). So again, each grammar class has a small format-string method that knows how to wrap a table name for its database, and the BaseGrammar class fills it in the same way for every grammar.

There are a whole bunch of these methods in the grammar classes for a whole range of things. Any differences that there can possibly be between databases are abstracted into these methods.

Compiling Methods

There are a whole bunch of methods that begin with process_ or _compile_, so let's explain what those are.

Now that all the differences between grammars are abstracted into the child grammar classes, all the heavy lifting can be done in the BaseGrammar class, which is the parent grammar class and really the engine behind compiling the queries for all grammars.

This BaseGrammar class is responsible for doing the actual compiling described in the above section. It has a bunch of methods like process_wheres, process_selects, etc. These are supporting methods that help process the SQL strings for the _compile_ methods.

There are also methods that begin with _compile_. These are responsible for compiling the actual respective queries. The heart of this class really lies in the _compile_select, _compile_create, _compile_update and _compile_delete methods.

Starting from the fully unabstracted version and abstracting step by step, each hardcoded piece of the query is replaced by one of the supporting process_ or _compile_ methods, until the final method in the class simply delegates to them. So we have a whole bunch of _compile methods, but they are mainly just supporting the main compiling of the select, create or alter statements.
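A rough sketch of that compile/process split, with simplified made-up names (not Masonite's actual methods), might look like:

```python
# Rough sketch of the compile/process split: _compile_select delegates to
# small process_ helpers for each part of the query.
class SketchGrammar:
    select_format = "SELECT {columns} FROM {table}{wheres}"

    def __init__(self, table, selects, wheres):
        self.table = table
        self.selects = selects
        self.wheres = wheres

    def process_selects(self):
        return ", ".join(self.selects) or "*"

    def process_wheres(self):
        if not self.wheres:
            return ""
        clauses = " AND ".join(f"{col} = '{val}'" for col, val in self.wheres)
        return f" WHERE {clauses}"

    def _compile_select(self):
        return self.select_format.format(
            columns=self.process_selects(),
            table=self.table,
            wheres=self.process_wheres(),
        )

SketchGrammar("users", ["username"], [("age", 18)])._compile_select()
# SELECT username FROM users WHERE age = '18'
```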

Models and Query Builder

Models and query builders are really hand in hand. In almost all cases, a single method on the model will pass everything off to the QueryBuilder class immediately.

Just know the Model is really just a small proxy for the QueryBuilder. Most methods on the model simply call the QueryBuilder so we will focus on the QueryBuilder.

The only things the model class contains are some small settings, like the table name, the attributes after a database call is made (the query results), and some other small settings like the connection and grammar to use.

It is important, though, to know the difference between the class (cls) and an object instance. Be sure to read the section below.

Meta Classing

One of the trickier bits of magic we have when it comes to the model is that we set a metaclass on the Model class (the base class that all of your User and Article models will inherit). What this does is essentially create a middleware that runs before methods are first called. Since it's really hard to handle both class-level calls and instance-level calls everywhere, it's easier to catch the call and turn the class into an instance before moving on.

This is hard to explain, but let's see what this really solves. We COULD require instantiating the model everywhere and perform model calls like User().where(...), but it doesn't look as clean as User.where(...). (Also, for backwards compatibility with Orator it would be a huge change if we didn't support this.)

So if you look at the Model.py class, we have a metaclass inherited (you'll notice it if you look at the file), which does a bit of magic and actually instantiates the class before any methods are called. This is similar to any normal Python hook you can tie into, like __getattr__.

This makes handling cls and self much easier, although there are special use cases where we need to handle cls directly, which is why you will see some @classmethod decorators on some model methods.
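The effect can be imitated with a few lines of plain Python. This is an illustration of the idea only, not Masonite's actual metaclass:

```python
# Imitation of the metaclass trick: attribute lookups that miss on the
# class itself instantiate the class and retry, so Model.where(...) works
# without an explicit instance.
class ModelMeta(type):
    def __getattr__(cls, name):
        # `name` was not found on the class: create an instance and defer
        return getattr(cls(), name)

class QueryBuilder:
    def where(self, column, value):
        return f"where {column} = {value}"

class Model(metaclass=ModelMeta):
    def __getattr__(self, name):
        # pass unknown instance attributes through to a query builder
        return getattr(QueryBuilder(), name)

Model.where("active", 1)    # class-level call, no explicit instance
Model().where("active", 1)  # instance-level call works the same way
```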

Pass Through

We mentioned that the model simply constructs a query builder and essentially passes everything off to the query builder class.

The issue, though, is that when you call something like User.where(..), it will call where on the User class. Since there's actually no where method on the model class, it will hook into __getattr__ on the model class. From there we catch a bunch of different methods located in the __passthrough__ attribute on the model and pass them right off to the query builder. This is important to understand.
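A stripped-down sketch of the pass-through, using a hypothetical one-method builder for illustration:

```python
# Stripped-down sketch of the pass-through: only names listed in
# __passthrough__ are forwarded to a fresh query builder.
class QueryBuilder:
    def __init__(self):
        self.wheres = []

    def where(self, column, value):
        self.wheres.append((column, value))
        return self

class Model:
    __passthrough__ = ["where"]

    def __getattr__(self, name):
        if name in self.__passthrough__:
            # forward the call to a query builder instance
            return getattr(QueryBuilder(), name)
        raise AttributeError(name)

builder = Model().where("active", 1)
# `builder` is now a QueryBuilder; further chaining happens on it
```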

Query Builder

The QueryBuilder class is responsible for building up the query, so it has a whole bunch of attributes on it that will eventually be passed off to the grammar class and compiled to SQL. That SQL is then passed to the connection class, which makes the database call and returns the result.

The QueryBuilder class is really the meat and potatoes of the ORM. It has the most features, needs to be the most robust, and takes the most time to build out and get right.

For example, when you call where on the model it will pass the info to the query builder and return this QueryBuilder class.

All additional calls will be done on THAT query builder object:

Finally when you call a method like .get() it will return a collection of results.

If you call first() it will return a single model:

So again we use the QueryBuilder to build up a query and then later execute it.
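The build-now, execute-later flow can be sketched like this (a toy builder, not the real API):

```python
# Sketch: each where() just records state on the builder; nothing is
# compiled or executed until a method like get() is called.
class QueryBuilder:
    def __init__(self, table):
        self.table = table
        self._wheres = []

    def where(self, column, value):
        self._wheres.append((column, value))
        return self  # every call returns the builder, enabling chaining

    def to_sql(self):
        clause = " AND ".join(f"`{c}` = '{v}'" for c, v in self._wheres)
        sql = f"SELECT * FROM `{self.table}`"
        return f"{sql} WHERE {clause}" if clause else sql

    def get(self):
        # In the real ORM this is where the grammar compiles the query
        # and the connection executes it, returning a collection.
        return self.to_sql()

query = QueryBuilder("users").where("active", 1).where("age", 21)
assert query.get() == "SELECT * FROM `users` WHERE `active` = '1' AND `age` = '21'"
```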

Expression Classes

There are a few different classes which will aid in the compiling of SQL from the grammar class. These really are just various classes with different attributes on them. They are internal only classes made to better compile things inside the BaseGrammar class, since we use things like isinstance checks and attribute conditionals. You will not be using these directly when developing applications. These classes are:

  • QueryExpression - Used for compiling where statements

  • HavingExpression - Used for compiling Having statements

  • JoinExpression - Used for compiling Join statements

  • UpdateExpression - Used for compiling Update statements

  • SubSelectExpression - Used for compiling sub selects. Sub selects can be placed inside where statements to make complex where statements more powerful

  • SubGroupExpression - Used to wrap a callable to be executed later. This is again useful for sub selects, adding a layer of abstraction for callables

These are simply used when building up different parts of a query. When _compile_wheres, _compile_update and other methods are run on the grammar class, these classes make it simpler to fetch the needed data, while staying specific enough that difficult use cases remain manageable to code for.

How classes interact with each other

Model -> QueryBuilder

The Model passes off anything set on it directly to the query builder once accessed. All subsequent calls will be based on a new query builder instance, and all query building will be done on that instance.

QueryBuilder -> Grammar

To be more clear, once we are done building the query and then call .get() or .first(), all the wheres, selects, group_by's etc are passed off to the correct grammar class like MySQLGrammar which will then compile down to a SQL string.

QueryBuilder -> Connection

That SQL string returned from the grammar class is then sent to the connection class along with the bindings from the grammar class. We then have a result in the form of a dictionary. We don't want to be working with a bunch of dictionaries though, we want to work with more models.

QueryBuilder Hydrating

The QueryBuilder object when returning the response is also responsible for hydrating your models if a model is passed in. If no model is passed into the initializer then it will just return a dictionary or list. Hydrating is really just a fancy word for filling dummy models with data. We really don't want to work with dictionaries in our project so we take the dictionary response and shove it into a Model and return the model. Now we have a class much more useful than a simple dictionary.

For times we have several results (a list of dictionaries) we simply loop through the list and fill a separate model with each dictionary. So if we have 5 results we loop through each one and build up a collection of 5 hydrated models. We do this by calling the .hydrate() method, which creates a new instance and hydrates it with the dictionary.
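Hydration can be pictured with this toy loop (simplified, not the actual hydrate implementation):

```python
# Sketch: "hydrating" means filling fresh model instances with the
# dictionary data returned from the connection.
class User:
    def hydrate(self, attributes):
        new = User()                   # create a fresh, empty instance
        new.__dict__.update(attributes)  # shove the dictionary into it
        return new

rows = [{"id": 1, "name": "Joe"}, {"id": 2, "name": "Bill"}]

# One model per dictionary; a list of results becomes a "collection".
collection = [User().hydrate(row) for row in rows]

assert len(collection) == 2
assert collection[0].name == "Joe"
assert collection[1].id == 2
```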

Relationships

RELATIONSHIPS ARE STILL A WORK IN PROGRESS AND SUBJECT TO CHANGE

Relationships are a bit magical and use a lot of low-level Python to get right. We needed to do some class-management tricks to nail the inherently magical nature of the relationship classes. For example, say we have a relationship like this:

This is innocent enough, but we would like it so that you can access it like this:

BUT we also want to be able to extend the relationship as well:

so we need to both access the attribute AND call the attribute. Very strange, I know. How would we get an attribute access to:

  • find the correct model in the method

  • build the query

  • find the correct foreign keys to fetch on

  • return a fully hydrated model ready to go

  • but when you call it, simply apply the wheres and return the query builder

For this we do some decorator and attribute access magic using the __get__ magic method, which is called whenever an attribute is accessed. We can then hijack this hook and return whatever we need: in this case, a fully hydrated model or a query builder.
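A small descriptor sketch of this idea (hypothetical classes, not the real relationship code):

```python
# Sketch: __get__ lets the same relationship return different things
# depending on whether it is accessed as an attribute or called.
class Relationship:
    def __init__(self, method):
        self.method = method

    def __get__(self, instance, owner):
        if instance is None:
            # Accessed on the class: hand back the descriptor itself,
            # which can still be called to get the raw query.
            return self
        # Accessed on an instance: run the query and "hydrate" a result.
        return {"hydrated_from": self.method(instance)}

    def __call__(self, instance):
        # Called directly: return the un-executed query description
        # so it could be extended with more wheres.
        return self.method(instance)

class User:
    @Relationship
    def profile(self):
        return "SELECT * FROM profiles WHERE user_id = ..."

user = User()
assert user.profile == {"hydrated_from": "SELECT * FROM profiles WHERE user_id = ..."}
assert User.profile(user) == "SELECT * FROM profiles WHERE user_id = ..."
```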

Relationship classes

It's useful to explain the relationship classes.

We have a BaseRelationship class which really just contains all the magic we need for the actual decorator to work.

We then have a BelongsTo relationship (which is imported as belongs_to in the __init__.py file, which is where the name change comes from in the decorator) which has a simple apply_query method which does the query needed to return the connection using the model's QueryBuilder. Here we have foreign and owner variables. foreign is the relationship class (in this case, Profile) and owner is the current model (in this case, User).

The query is applied and returns a result from the query builder in the form of a dictionary or a list (for one result it will be a dictionary; if multiple are returned it will be a list). Then the normal process takes its course: if a dictionary, it will return a hydrated model, and if a list, it will return a collection of hydrated models.

Schema Class

The Schema class is responsible for the creation and altering of tables, so it has a slightly different syntax than the normal QueryBuilder class. Here we don't have things like where and limit. Instead we have things in the format of:

Classes

So now let's talk about how each of the 3 primary classes talks to the others here.

Schema -> Blueprint

The Schema class is responsible for specifying the table and/or the connection to use. It will then pass that information off to the Blueprint class, which really is the same thing as the relationship between Model and QueryBuilder. The Schema class is also responsible for setting either the create or alter modes. This is set if you use Schema.create('users') or Schema.table('users') respectively.

The Blueprint class is similar to the QueryBuilder class because both simply build up a bunch of columns to act on. One is just used for fetching data and the other is used for changing or creating tables.

The Schema class calls the blueprint class as a context manager.

The blueprint class will be built up in this format:

Notice we are just building up a blueprint class.
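The context-manager pattern can be sketched in plain Python like this (a hypothetical Blueprint with simplified column methods, not the real class):

```python
# Sketch: the blueprint collects columns inside a "with" block; in the
# real ORM, exiting the block is when the SQL gets compiled and run.
class Blueprint:
    def __init__(self, table):
        self.table = table
        self.columns = []

    def string(self, name):
        self.columns.append((name, "VARCHAR"))
        return self

    def integer(self, name):
        self.columns.append((name, "INTEGER"))
        return self

    def __enter__(self):
        return self

    def __exit__(self, *args):
        # Real implementation would compile and execute here.
        return False

with Blueprint("users") as blueprint:
    blueprint.string("name")
    blueprint.integer("age")

assert blueprint.columns == [("name", "VARCHAR"), ("age", "INTEGER")]
```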

When we start up the blueprint class, if we are creating columns then we will be setting additional attributes on a Table class. If we are updating a table then we will be setting attributes on the TableDiff class.

For example when we call:

it is a proxy call to

The blueprint class then builds up the table class.

Blueprint -> Platform

Compiling DDL statements is much more complicated than compiling DML statements, so there is an entire class dedicated to compiling DDL statements. The Platform classes are similar to Grammar classes, as both are used to compile SQL.

For example, SQLite has an extremely limited alter statement. So adding, renaming or modifying columns relies on creating a temporary table, migrating the existing table to the temp table, creating a new table based on the existing and modified schema, migrating the old columns to the new columns, and finally dropping the temp table. You can see how this is not generic, so it requires its own logic.

Because of this, there are Platform classes: SQLitePlatform, MySQLPlatform, etc. These classes have compile_create_sql and compile_alter_sql methods. These methods take a single table class, the same table class the blueprint class built up.

This Table class has methods like added_columns, removed_indexes, etc. We can use these to build up our alter and create statements.

For example, Postgres requires alter statements for adding columns to be run one at a time, so we can't add multiple columns with one alter query. Instead we need to loop through all the Table.added_columns and create a separate alter query for each column.
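That per-column loop can be sketched like this (illustrative SQL strings only):

```python
# Sketch: one ALTER statement is generated per added column, mirroring
# the loop over Table.added_columns described above.
added_columns = [("age", "INTEGER"), ("email", "VARCHAR(255)")]

statements = [
    f"ALTER TABLE users ADD COLUMN {name} {column_type}"
    for name, column_type in added_columns
]

assert statements == [
    "ALTER TABLE users ADD COLUMN age INTEGER",
    "ALTER TABLE users ADD COLUMN email VARCHAR(255)",
]
```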

Compiling

Finally we need to compile the query, which is simply done by calling blueprint.to_sql(). This will build either a create or an alter query depending on what was originally set by the Schema class.

Introduction

Masonite ORM is a beautiful implementation that includes models, migrations, a query builder, seeds, command scaffolding, query scopes, eager loading, model relationships and many more features.

Masonite ORM currently supports MySQL, MariaDB, Postgres and SQLite databases.

Models


Models are the easiest way to interact with your tables. A model lets you interact with a table through a Python class in a simple and elegant way, with all the hard overhead handled for you under the hood. A model can be used to query the data in the table, create new records, fetch related records between tables and much more.

Creating A Model

The first step in using models is actually creating them. You can scaffold out a model by using the command:

You can use the --directory flag to specify the location of these models

This will create a post model like so:

From here you can do as basic or advanced queries as you want. You may need to configure your model based on your needs, though.

From here you can start querying your records:

We'll talk more about setting up your model below

Conventions And Configuration

Masonite ORM makes a few assumptions in order to have the easiest interface for your models.

The first is table names. Table names are assumed to be the plural of your model name. If you have a User model then the users table is assumed, and if you have a model like Company then the companies table is assumed. Masonite ORM is smart enough to know that the plural of Company is not Companys, so don't worry about it not being able to pick up your table name.
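As a rough illustration only (Masonite ORM uses a proper inflection library, not this toy logic), the convention behaves like:

```python
# Toy pluralizer demonstrating the table-name convention; real
# inflection handles far more cases than this sketch.
def table_name(model_name):
    name = model_name.lower()
    if name.endswith("y") and name[-2] not in "aeiou":
        return name[:-1] + "ies"   # Company -> companies
    return name + "s"              # User -> users

assert table_name("User") == "users"
assert table_name("Company") == "companies"
```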

Table Name

If your table name is something other than the plural of your models you can change it using the __table__ attribute:

Primary Keys

The next thing Masonite assumes is the primary key. Masonite ORM assumes that the primary key name is id. You can change the primary key name easily:

Connections

The next thing Masonite assumes is that you are using the default connection you set up in your configuration settings. You can also change this on the model:

Mass Assignment

By default, Masonite ORM protects against mass assignment to help prevent users from changing values on your tables you didn't want.

This is used in the create and update methods. You can set the columns you want to be mass assignable easily:

Guarded attributes can be used to specify those columns which are not mass assignable. You can prevent some of the fields from being mass-assigned:

Timestamps

Masonite also assumes you have created_at and updated_at columns on your table. You can easily disable this behavior:

Timezones

Models use UTC as the default timezone. You can change the timezones on your models using the __timezone__ attribute:

Querying

Single results

A query result will either have 1 or more records. If your model result has a single record then the result will be the model instance. You can then access attributes on that model instance. Here's an example:

You can also get a record by its primary key:

Collections

If your model result returns several results then it will be wrapped in a collection instance which you can use to iterate over:

If you want to find a collection of records based on the model's primary key you can pass a list to the find method:

The collection class also has some handy methods you can use to interact with your data:

Deleting

You may also quickly delete records:

This will delete the record based on the primary key value of 1.

You can also delete based on a query:

Sub-queries

You may also use sub-queries to do more advanced queries using lambda expressions:

Selecting

By default, Masonite ORM performs SELECT * queries. You can change this behavior in a few ways.

The first way is to specify a __selects__ attribute with a list of column names. You may use the as keyword to alias your columns directly from this list:

Now when you query your model, these selects will automatically be included:

Another way is directly on the all() method:

This also works on the get method:

Relationships

Another great feature, when using models, is to be able to relate several models together (like how tables can relate to each other).

Belongs To (One to One)

A belongs to relationship is a one-to-one relationship between 2 table records.

You can add a one-to-one relationship easily:

It will be assumed that the primary key of the relationship between users and companies is id -> {method_name}_id. You can change the relating columns if that is not the case:

The first argument is always the column name on the current model's table and the second argument is the related field on the other table.

Has One (One to One)

In addition to belongs to, you can define the inverse of a belongs to:

Note the keys here are flipped. This is the only relationship that has the keys reversed

Has Many (One to Many)

Another relationship is a one-to-many relationship where a record relates to many records, in another table:

The first argument is always the column name on the current model's table and the second argument is the related field on the other table.

Has Many (Many To Many)

When working with many to many relationships, there is a pivot table in between that we must account for. Masonite ORM will handle this pivot table for you entirely under the hood.

In a real world situation you may have a scenario where you have products and stores.

Stores can have many products and also products can be in many stores. For example, a store can sell a red shirt and a red shirt can be sold in many different stores.

In the database this may look something like this:

Notice that there is a pivot table called product_store that is in between stores and products.

We can use the belongs_to_many relationship to get all the products of a store easily. Let's start with the Store model:

We can change the signature of the decorator to specify our foreign keys. In our example this would look like this:

The first 2 keys are the foreign keys relating from stores to products through the pivot table and the last 2 keys are the foreign keys on the stores and products table.

If there are additional fields on your pivot table you need to fetch you can add the extra fields to the pivot record like so:

This will fetch the additional fields on the pivot table which we have access to.

Once we create this relationship we can start querying from stores directly to products:

On each fetched record you can also get the pivot table and perform queries on it. This pivot record is the joining record inside the pivot table (product_store) where the store ID and the product ID match. By default this attribute is pivot.

Changing Options

There are quite a few defaults that are created but there are ways to override them.

The first default is that the pivot table has a primary key called id. This is used to hydrate the record so you can update the pivot records. If you do not have a pivot primary key you can turn this feature off:

You can also change the ID to something other than id:

The next default is the name of the pivot table. The name of the pivot table is the singular form of both table names in alphabetical order. For example, if you are pivoting a persons table and a houses table then the table name is assumed to be house_person. You can change this naming:
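The naming convention can be sketched in plain Python (with a naive singularizer for illustration only):

```python
# Sketch: pivot table name = singular form of both table names,
# joined in alphabetical order.
def pivot_table_name(table_a, table_b):
    # Naive singularization, for illustration only.
    singular = lambda t: t[:-1] if t.endswith("s") else t
    return "_".join(sorted([singular(table_a), singular(table_b)]))

assert pivot_table_name("houses", "persons") == "house_person"
assert pivot_table_name("stores", "products") == "product_store"
```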

The next default is that there are no timestamps (updated_at and created_at) on your pivot table. If you would like Masonite to manage timestamps you can:

The next default is that the pivot attribute on your model will be called pivot. You can change this:

Now when you need to get the pivot relationship you can do this through:

If you have timestamps on your pivot table, they must be called created_at and updated_at.

Using Relationships

You can easily use relationships to get those related records. Here is an example on how to get the company record:

With Count

The with_count method can be used to get the number of records in a relationship.

If you want to fetch the number of permissions a role has for example:

This will return a collection on each record with the {relationship}_count attribute. You can get this attribute like this:

The method also works for single records.

You may also optionally pass a lambda function as a callable to apply an additional query filter against the relationship.

Eager Loading

You can eager load any related records. Eager loading is when you preload model results instead of calling the database each time.

Let's take the example of fetching a user's phone:

This will result in the query:

This will result in a lot of database calls. Now let's take a look at the same example but with eager loading:

This would now result in this query:

This resulted in only 2 queries. Any subsequent calls will pull the result from the eager loaded result set.

You can also default all model calls with eager loading by using the __with__ attribute on the model:

Nested Eager Loading

You may also eager load multiple relationships. Let's take another more advanced example...

Let's say you would like to get a user's phone as well as their contacts. The code would look like this:

This would result in the query:

You can see how this can get pretty large as we are looping through hundreds of users.

We can use nested eager loading to solve this by specifying the chain of relationships using . notation:

This would now result in the query:

You can see how this would result in 3 queries no matter how many users you had.

Joining

If you have relationships on your models you can easily join them:

If you have a model like this:

You can use the joins method:

This will build out the join method.

You can also specify the clause of the join (inner, left, right). The default is an inner join.

Additionally if you want to specify additional where clauses you can use the join_on method:

Scopes

Scopes are a way to take common queries you may be doing and condense them into a method where you can then chain onto them. Let's say you are doing a query like getting the active user frequently:

We can take this query and add it as a scope:

Now we can simply call the active method:

You may also pass in arguments:

then pass an argument to it:

Soft Deleting

Masonite ORM also comes with a global scope to enable soft deleting for your models.

Simply inherit the SoftDeletesMixin scope class:

Now whenever you delete a record, instead of deleting it, it will set the deleted_at column on the record to the current timestamp:

When you fetch records it will also only fetch undeleted records:

You can disable this behavior as well:

You can also get only the deleted records:

You can also restore records:

Lastly, you can override this behavior and force the delete query:

You still need to add the deleted_at datetime field to your database table for this feature to work.

There is also a soft_deletes() helper that you can use in migrations to add this field quickly.

If the column name is not called deleted_at you can change the column to a different name:

Truncating

Updating

You can update records:

When updating a record, only attributes which have changed are applied. If there are no changes, the update won't be triggered.

You can override this behaviour in different ways:

  • you can pass force=True to the update() method

  • you can define the __force_update__ attribute on the model class

  • you can use the force_update() method on the model:

You can also update or create records as well:

If there is a record with the username of "Joe" it will update that record or, if not present, it will create the record.

Note that when the record is created, the two dictionaries will be merged together. So if this code was to create a record it would create a record with both the username of Joe and active of 1.
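The merge behavior on create can be pictured as a simple dictionary merge:

```python
# Sketch of update_or_create semantics: the lookup dictionary and the
# values dictionary are combined into one record on create.
wheres = {"username": "Joe"}
values = {"active": 1}

# Record exists  -> only `values` are applied as an update.
# Record missing -> both dictionaries are merged for the insert:
created_record = {**wheres, **values}

assert created_record == {"username": "Joe", "active": 1}
```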

When updating records the updated_at column will be automatically updated. You can control this behaviour by using the activate_timestamps method:

Creating

You can easily create records by passing in a dictionary:

This will insert the record into the table, create and return the new model instance.

Note that this will only create a new model instance but will not contain any additional fields on the table. It will only have whichever fields you pass to it.

You can "refetch" the model after creating to get the rest of the record. This will use the find method to get the full record. Let's say you have a scenario in which the active flag defaults to 1 from the database level. If we create the record, the active attribute will not fetched since Masonite ORM doesn't know about this attribute.

In this case we can refetch the record using .fresh() after create:

Bulk Creating

You can also bulk create using the query builder's bulk_create method:

This will return a collection of users that have been created.

Since a bulk create hydrates all the models involved, it can be much slower when working with a lot of records. In that case, using the query builder directly without model hydrating will be faster. You can do this by getting a "new" query builder and calling any required methods off that:

Serializing

You can serialize a model very quickly:

This will return a dict of all the model fields. Some important things to note:

  • Date fields will be serialized with ISO format

  • Eager loaded relationships will be serialized

  • Attributes defined in __appends__ will be added

If you want to hide model fields you can use __hidden__ attribute on your model:

In the same way you can use __visible__ attribute on your model to explicitly tell which fields should be included in serialization:

You cannot use both __hidden__ and __visible__ on the model.

Changing Primary Key to use UUID

Masonite ORM also comes with another global scope to enable using UUID as primary keys for your models.

Simply inherit the UUIDPrimaryKeyMixin scope:

You can also define a UUID column with the correct primary constraint in a migration file

Your model is now set to use UUID as a primary key. It will be automatically generated at creation.

You can change UUID version standard you want to use:

Casting

Not all data may be in the format you need it. If you find yourself casting attributes to different values, like casting active to an int then you can set it to the right type in the model:

Now whenever you get the active attribute on the model it will be an int.

Other valid values are:

  • int

  • bool

  • json

Dates

Masonite uses pendulum for dates. Whenever dates are used, an instance of pendulum will be returned.

You can specify which fields are dates on your model. This will be used for serializing and other logic requirements:

Overriding Dates

If you would like to change this behavior you can override 2 methods: get_new_date() and get_new_datetime_string():

The get_new_date() method accepts 1 parameter which is an instance of datetime.datetime. You can use this to parse and return whichever dates you would like.

If the datetime parameter is None then you should return the current date.

The get_new_datetime_string() method takes the same datetime parameter but this time should return a string to be used in a table.

Accessors and Mutators (Getter and Setter)

Accessors and mutators are a great way to fine tune what happens when you get and set attributes on your models.

To create an accessor we just need to create a method following the get_{name}_attribute naming convention:
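A toy sketch of how such a naming convention can be wired up with __getattr__ (simplified, not Masonite's actual implementation):

```python
# Sketch: attribute access checks for a get_{name}_attribute accessor
# before falling back to the raw stored value.
class Model:
    def __init__(self, **attributes):
        self.__dict__["_attributes"] = attributes

    def __getattr__(self, name):
        accessor = f"get_{name}_attribute"
        if accessor in type(self).__dict__:
            # Route the raw value through the accessor method.
            return getattr(self, accessor)(self._attributes.get(name))
        return self._attributes[name]

class User(Model):
    def get_name_attribute(self, value):
        return value.upper()   # fine-tune the value on the way out

user = User(name="joe", email="joe@example.com")
assert user.name == "JOE"                 # goes through the accessor
assert user.email == "joe@example.com"    # raw value, no accessor
```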

The same thing is true for mutating, or setting, the attribute:

Events

Models emit various events at different stages of their life cycle. Available events are:

  • booting

  • booted

  • creating

  • created

  • deleting

  • deleted

  • hydrating

  • hydrated

  • saving

  • saved

  • updating

  • updated

Observers

You can listen to various events through observers. Observers are simple classes that contain methods equal to the event you would like to listen to.

For example, if you want to listen to when users are created you will create a UserObserver class that contains the created method.

You can scaffold an observer by running:

If you do not specify a model option, it will be assumed the model name is the same as the observer name

Once the observer is created you can add your logic to the event methods:

The model object received in each event method will be the model at that point in time.

You may then set the observer to a specific model.

If you are using Masonite, this could be done in a service provider:

If you are using Masonite ORM outside of Masonite you can simply do this at the bottom of the model definition:

Related Records

There are many times you need to take several related records and assign them all to the same attribute based on another record.

For example, you may have articles you want to switch the authors of.

For this you can use the attach and save_many methods. Let's say you had a User model that had an articles method relating to the Articles model.

This will take all articles where user_id is 2 and assign them the related record between users and article (user_id).

You may do the same for a one-to-one relationship:

Attributes

There are a few attributes that are used for handling model data.

Dirty Attributes

When you set an attribute on a model, the model becomes "dirty". Meaning the model now has attributes changed on it. You can easily check if the model is dirty:

You can specifically get a dirty attribute:

This will get the value of the dirty attribute and not the attribute that was set on the model.

Original

This keeps track of the original data that was first set on the model. This data does not change throughout the life of the model:
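A minimal sketch of tracking original versus dirty data (the attribute names here are hypothetical, not the real internals):

```python
# Sketch: the original attributes are captured once at hydration and
# never mutated; changed values are tracked separately as "dirty".
class Model:
    def __init__(self, **attributes):
        self._original = dict(attributes)  # captured once, never changed
        self._dirty = {}                   # only the changed values

    def set(self, key, value):
        self._dirty[key] = value

    def is_dirty(self):
        return bool(self._dirty)

user = Model(name="Joe")
assert not user.is_dirty()

user.set("name", "Bill")
assert user.is_dirty()
assert user._dirty["name"] == "Bill"    # the new, dirty value
assert user._original["name"] == "Joe"  # the original is untouched
```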

Saving

Once you have set attributes on a model, you can persist them up to the table by using the save method:

Orator To Masonite ORM

Orator To Masonite ORM Guide

This guide will explain how to move from Orator to Masonite ORM. Masonite ORM was made to be pretty much a straight port of Orator, but allows the Masonite organization complete creative control of the ORM.

Orator has since been abandoned and Masonite needed a good ORM to keep fresh features and security up to date with the ORM.

Before moving your project over to Masonite ORM, please keep in mind that some features are not (at least currently) ported over from Orator. These are features that may be ported over in the future.

This list is a continuously evolving list of features and anything we develop will be removed from the list. These features are planned but not yet finished.

Currently these features are:

  • has one through relationship

If you are using Masonite 2 then you will not be able to upgrade to Masonite ORM because of version conflicts between Masonite 2 and Masonite ORM.

Config

The configuration dictionary between Orator and Masonite ORM is identical. The only difference is that Masonite ORM requires a config/database.py file whereas Orator was optional and needed to be explicitly specified in several places like commands.

If you are coming from Masonite already then don't worry, this file is already there. If not you will need to create this config/database.py file.

This is an example of a Masonite ORM config dictionary:

The other thing you will need to do is change the resolver classes. Orator has a configuration structure like this:

In Masonite ORM those same resolver classes look like this:

Models

Models are identical but the imports are different. Orator requires you to set the model resolver from the configuration file and then you import that model.

In Masonite ORM you import the model directly:

Scopes

Scopes are also identical but the import changes:

Fetching builder relations

In Orator you could do this:

This would delay the relationship call and would instead append the builder before returning the result.

The above call in Masonite ORM becomes:

Collections

Anytime your results return multiple values then an instance of Collection is returned. This allows you to iterate over your values and has a lot of shorthand methods.

When using collections as a query result you can iterate over the collection as if it were a normal list:

Available Methods

all

Returns the underlying list or dict represented by the collection:

avg

Returns the average of all items in the collection:

If the collection contains nested objects or dictionaries (e.g. for a collection of models), you must pass a key to determine which values to average:

chunk

Chunks a collection into multiple, smaller collections of a given size. Uses a generator to keep each chunk small. Useful for chunking large data sets where pulling too many results in memory will overload the application.
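A plain-Python sketch of a generator-based chunk (the collection method wraps each chunk in a Collection rather than a plain list):

```python
# Sketch: yield slices of a fixed size so only one chunk is
# materialized at a time, keeping memory usage low.
def chunk(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

chunks = list(chunk([1, 2, 3, 4, 5], 2))
assert chunks == [[1, 2], [3, 4], [5]]
```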

collapse

Collapses a collection of lists into a flat collection:

contains

Determines whether the collection contains a given item:

You can also pass a key / value pair to the contains method, which will determine if the given pair exists in the collection.

Finally, you may also pass a callback to the contains method to perform your own truth test:

count

Returns the total number of items in the collection. The standard Python len() function can also be used.

diff

Returns the difference, as a collection, against another collection:

each

Iterates over the items in the collection and passes each item to a given callback:

every

Creates a new collection by applying a given callback on every element:

filter

Filters the collection by a given callback, keeping only those items that pass a given truth test:

first

Returns the first item of the collection, if no arguments are given.

When given a truth test as callback, it returns the first element in the collection that passes the test:

flatten

Flattens a multi-dimensional collection into a single dimension:

forget

Removes an item from the collection by its key:

Unlike most other collection methods, forget does not return a new modified collection; it modifies the collection it is called on.

for_page

Paginates the collection by returning a new collection containing the items that would be present on a given page number:

for_page(page, count) takes the page number and the number of items to show per page.

get

Returns the item at a given key or index. If the key does not exist, None is returned. An optional default value can be passed as the second argument:

group_by

Returns a collection where items are grouped by the given key:

implode

Joins the items in a collection with a comma, or with the given glue string.

If the collection contains dictionaries or objects, you must pass the key of the attributes you wish to join:

is_empty

Returns True if the collection is empty; otherwise, False is returned:

last

Returns the last element in the collection if no arguments are given.

Returns the last element in the collection that passes the given truth test:

map

Iterates through the collection and passes each value to the given callback. The callback is free to modify the item and return it, thus forming a new collection of modified items:

map_into

Iterates through the collection and casts each value into the given class:

A class method can also be specified. Additional keyword arguments can be passed to this method:

max

Retrieves the max value of the collection:

If the collection contains dictionaries or objects, you must pass the key on which to compute the max value:

merge

Merges the given list into the collection:

Unlike most other collection methods, merge does not return a new modified collection; it modifies the collection it is called on.

pluck

Retrieves all of the collection values for a given key:

A second key can be given to pluck the collection into a dictionary keyed by that column's values.

You can pass keep_nulls=False to remove None values from the collection.

pop

Removes and returns the last item from the collection:

prepend

Adds an item to the beginning of the collection:

pull

Removes and returns an item from the collection by its key:

push

Appends an item to the end of the collection:

put

Sets the given key and value in the collection:

random

Returns a random item from the collection:

An integer count can be given to the random method to specify how many items you would like to randomly retrieve from the collection. A collection is always returned when a count is specified.

If the collection length is smaller than the specified count, a ValueError will be raised.

reduce

Reduces the collection to a single value, passing the result of each iteration into the subsequent iteration.

The initial value is 0 by default but can be overridden:

reject

Unlike most other collection methods, reject does not return a new modified collection; it modifies the collection it is called on.

reverse

Reverses the order of the items in the collection:

Unlike most other collection methods, reverse does not return a new modified collection; it modifies the collection it is called on.

serialize

shift

Removes and returns the first item from the collection:

sort

Sorts the collection:

sum

Returns the sum of all items in the collection:

If the collection contains dictionaries or objects, you must pass a key to use for determining which values to sum:

take

Returns a new collection with the specified number of items:

You can also pass a negative integer to take the specified amount of items from the end of the collection:

to_json

Converts the collection into JSON:

transform

Iterates over the collection and calls the given callback with each item in the collection. The items in the collection will be replaced by the values returned by the callback:

unique

Returns all of the unique items in the collection:

When dealing with dictionaries or objects, you can specify the key used to determine uniqueness:

where

Filters the collection by a given key / value pair:

zip

Merges together the values of the given list with the values of the collection at the corresponding index:

Installation

Setting up Masonite is extremely simple.

If you are using the Masonite web framework then all of the installation is set up for you. If you are using anything other than Masonite, or building your own Python application, then be sure to follow the install steps below:

Pip install

First install via pip:

Configuration File

To start configuring your project you'll need a config/database.py file. In this file we will be able to put all our connection information.

Once we have our config/database.py file we can put a DATABASES variable with a dictionary of connection details. Each key will be the name of our connection. The connection name can be whatever you like and does not need to relate to a database name. Common connection names could be something like dev, prod and staging. Feel free to name these connections whatever you like.

The connection variable will look something like this:

Lastly you will need to import the ConnectionResolver class and register the connection details. The normal convention is to set this to a variable called DB:

After this you have successfully set up Masonite ORM in your project!

MSSQL

Masonite ORM supports Microsoft SQL Server and provides several options to modify the connection string. All of the available options are:

Transactions

You can use global level database transactions easily by importing the connection resolver class:

You can then either rollback or commit the transactions:

You may also optionally pass the connection you'd like to use:

You can also use the transaction as a context manager:

If any exception is raised inside the context, the transaction will be rolled back; otherwise the transaction will be committed.

Logging

If you would like, you can log any queries Masonite ORM generates to any supported Python logging handler. First you need to enable query logging in the config/database.py file through the log_queries boolean parameter.

You can put the logging configuration at the bottom of your config/database.py file. The StreamHandler will output the queries to the terminal.

You can specify as many handlers as you would like. Here's an example of logging to both the terminal and a file:

Raw Queries

You can query the database directly using the connection resolver class. If you set the connection resolver to the variable DB you can import it like:

You may also pass query bindings to protect against SQL injection by passing a list of bindings:

This will use the default connection but you may also optionally pass a connection to use:

If you would like to use models you should reference the ORM models documentation. This is an example of using models directly with the query builder.

You can contribute to the project at the Masonite ORM repository.

Masonite ORM was built for the Masonite web framework but is designed to work in any Python project. It is heavily inspired by the Orator Python ORM and is designed to be a drop-in replacement for Orator. Orator was inspired by Laravel's Eloquent ORM, so if you are coming from a framework like Laravel or Ruby on Rails you should see plenty of similarities between this project and Eloquent or Active Record.

Almost all of a model's querying methods are passed off to the query builder. If you would like to see all the methods available for the query builder, see the QueryBuilder documentation.

If you would like to see more available methods, like pluck, be sure to read the Collections documentation.

Collection methods can also be used directly on the model:

If you need more advanced serialization, or are building a complex API, you should use the masonite-api package.

If you want to transform the original collection, use the transform method.

It's the inverse of the filter method. It filters the collection using the given callback. The callback should return True for any items to remove from the resulting collection:

Converts the collection into a list. If the collection's values are ORM models, the models will also be converted to dictionaries:

Be careful, serialize also converts all of its nested objects. If you want to get the underlying items as is, use the all method instead.

If you wish to create a new collection instead, use the map method.

Models
user = User
user #== <class User>
user.where('id', 1) #== <masonite.orm.Querybuilder object>
user.where('id', 1).where('active', 1) #== <masonite.orm.Querybuilder object>
user.select('id').where('id', 1).where('active', 1).get() #== <masonite.orm.Collection object>
SELECT `id` FROM `users` WHERE `id` = '1' AND `active` = 1
SELECT `id` FROM `users` WHERE `id` = '?' AND `active` = '?'
(1,1)
SELECT * FROM `users` where `age` = '18'
SELECT * FROM `users` where `age` = '?'
SELECT * from `users` LIMIT 1
SELECT TOP 1 * from `users`
def select_format(self):
    return "SELECT {columns} FROM {table} {limit}"
def select_format(self):
    return "SELECT {limit} {columns} FROM {table}"
# MySQLGrammar 

def limit_string(self):
  return "LIMIT {limit}"
# MSSQLGrammar

def limit_string(self):
  return "TOP {limit}"
# Everything is completely abstracted into its own class and class methods.
sql = self.select_format().format(
    columns=self.process_columns(),
    table=self.process_table(),
    limit=self.process_limit()
)
"SELECT {columns} FROM {table} {limit}".format(
    columns="*",
    table="`users`",
    limit="LIMIT 1"
)
#== 'SELECT * FROM `users` LIMIT 1'
"SELECT {limit} {columns} FROM {table} ".format(
    columns="*",
    table="`users`",
    limit="TOP 1"
)
#== 'SELECT TOP 1 * FROM `users`'
`users`
"users"
[users]
table = self.table_string().format(table=table)
# MySQL
table = "`{table}`".format(table=table)
# MSSQL
table = "[{table}]".format(table=table)
def _compile_select(self):
    "SELECT {columns} FROM {table} {limit}".format(
        columns="*",
        table="`users`",
        limit="LIMIT 1"
    )
#== 'SELECT * FROM `users` LIMIT 1'
def _compile_select(self):
    "SELECT {columns} FROM {table} {wheres} {limit}".format(
        columns=self.process_columns(),
        table=self.process_from(),
        wheres=self.process_wheres(),
        limit=self.process_limit()
    )

    #== 'SELECT * FROM `users` LIMIT 1'
def _compile_select(self):
    self.select_format().format(
        columns=self.process_columns(),
        table=self.process_from(),
        wheres=self.process_wheres(),
        limit=self.process_limit()
    )
    #== 'SELECT * FROM `users` LIMIT 1'
class User(Model):
    pass
result = User().where('...')
result = User.where('...')
user = User.where('age', 18)
#== <masonite.orm.QueryBuilder object>
user = User.where('age', 18).where('name', 'Joe').limit(1)
#== <masonite.orm.QueryBuilder object x100>
user = User.where('age', 18).where('name', 'Joe').limit(1).get()
#== <masonite.orm.Collection object x101>
user = User.where('age', 18).where('name', 'Joe').limit(1).first()
#== <app.User object x100>
class User:

    @belongs_to('local_key', 'foreign_key')
    def profile(self):
        return Profile
user = User.find(1)
user.profile.city
user = User.find(1)
user.profile().city
CREATE TABLE `table` (
    `name` VARCHAR(255)
)
with Schema.table('users') as blueprint:
    blueprint.string('name')
    blueprint.integer('age')
with Schema.table('users') as blueprint:
    blueprint.string('name')
table.add_column('name', column_type='string')
$ python masonite-orm model Post
from masoniteorm.models import Model

class Post(Model):
    """Post Model"""
    pass
user = User.first()
users = User.all()
active_users = User.where('active', 1).first()
class Clients:
  __table__ = "users"
class Clients:
  __primary_key__ = "user_id"
class Clients:
  __connection__ = "staging"
class Clients:
  __fillable__ = ["email", "active", "password"]
class Clients:
  __guarded__ = ["password"]
class Clients:
  __timestamps__ = False
class User(Model):
    __timezone__ = "Europe/Paris"
from app.models.User import User

user = User.first()
user.name #== 'Joe'
user.email #== 'joe@masoniteproject.com'
from app.models.User import User

user = User.find(1)
user.name #== 'Joe'
user.email #== 'joe@masoniteproject.com'
from app.models.User import User

users = User.where('active', 1).get()
for user in users:
  user.name #== 'Joe'
  user.active #== '1'
  user.email #== 'joe@masoniteproject.com'
users = User.find([1,2,3])
for user in users:
  user.name #== 'Joe'
  user.active #== '1'
  user.email #== 'joe@masoniteproject.com'
user_emails = User.where('active', 1).get().pluck('email') #== Collection of email addresses
from app.models.User import User

user = User.delete(1)
from app.models.User import User

user = User.where('active', 0).delete()
from app.models.User import User

users = User.where(lambda q: q.where('active', 1).where_null('deleted_at'))
# == SELECT * FROM `users` WHERE (`active` = '1' AND `deleted_at` IS NULL)
class Store(Model):
    __selects__ = ["username", "administrator as is_admin"]
store.all() 
#== SELECT `username`, `administrator` as is_admin FROM `users`
store.all(["username", "administrator as is_admin"]) 
#== SELECT `username`, `administrator` as is_admin FROM `users`
store.where("active", 1).get(["username", "administrator as is_admin"]) 
#== SELECT `username`, `administrator` as is_admin FROM `users` WHERE `active` = 1
from masoniteorm.relationships import belongs_to
class User:

  @belongs_to
  def company(self):
    from app.models.Company import Company
    return Company
from masoniteorm.relationships import belongs_to
class User:

  @belongs_to('primary_key_id', 'user_id')
  def company(self):
    from app.models.Company import Company
    return Company
from masoniteorm.relationships import has_one
class User:

  @has_one
  def company(self):
    from app.models.Company import Company
    return Company
from masoniteorm.relationships import has_one
class User:

  @has_one('other_key', 'local_key')
  def company(self):
    from app.models.Company import Company
    return Company
from masoniteorm.relationships import has_many
class User:

  @has_many('company_id', 'id')
  def posts(self):
    from app.models.Post import Post
    return Post
stores
-------
id
name

product_store
--------------
id
store_id
product_id

product
--------
id
name
from masoniteorm.models import Model
from masoniteorm.relationships import belongs_to_many
class Store(Model):

  @belongs_to_many
  def products(self):
    from app.models.Product import Product
    return Product
from masoniteorm.models import Model
from masoniteorm.relationships import belongs_to_many
class Store(Model):

  @belongs_to_many("store_id", "product_id", "id", "id")
  def products(self):
    from app.models.Product import Product
    return Product
@belongs_to_many("store_id", "product_id", "id", "id", extra_fields=['is_active'])
  def products(self):
    from app.models.Product import Product
    return Product
store = Store.find(1)
for product in store.products:
    product.name #== Red Shirt
store = Store.find(1)
for product in store.products:
    product.pivot.updated_at #== 2021-01-01
    product.pivot.update({"updated_at": "2021-01-02"})
@belongs_to_many(pivot_id=None)
@belongs_to_many(pivot_id="other_column")
@belongs_to_many(table="home_ownership")
@belongs_to_many(with_timestamps=True)
@belongs_to_many(attribute="ownerships")
store = Store.find(1)
for product in store.products:
    product.ownerships.updated_at #== 2021-01-01
    product.ownerships.update({"updated_at": "2021-01-02"})
user = User.first()
user.company #== <app.models.Company>
user.company.name #== Masonite X Inc.

for post in user.posts:
    post.title
Role.with_count('permissions').get()
roles = Role.with_count('permissions').get()
for role in roles:
  role.permissions_count #== 7
roles = Role.with_count('permissions').find(1).permissions_count #== 7
Role.with_count(
    'permissions',
    lambda q: (
        q.where_like("name", "%Creates%")
    )
)
users = User.all()
for user in users:
    user.phone
SELECT * FROM users
SELECT * FROM phones where user_id = 1
SELECT * FROM phones where user_id = 2
SELECT * FROM phones where user_id = 3
SELECT * FROM phones where user_id = 4
...
users = User.with_('phone').get()
for user in users:
    user.phone
SELECT * FROM users
SELECT * FROM phones where user_id IN (1, 2, 3, 4)
from masoniteorm.models import Model
from masoniteorm.relationships import belongs_to_many
class Store(Model):

  __with__ = ['products']

  @belongs_to_many
  def products(self):
    from app.models.Product import Product
    return Product
users = User.all()
for user in users:
    for contact in user.phone:
        contact.name
SELECT * FROM users
SELECT * FROM phones where user_id = 1
SELECT * from contacts where phone_id = 30
SELECT * FROM phones where user_id = 2
SELECT * from contacts where phone_id = 31
SELECT * FROM phones where user_id = 3
SELECT * from contacts where phone_id = 32
SELECT * FROM phones where user_id = 4
SELECT * from contacts where phone_id = 33
...
users = User.with_('phone.contacts').all()
for user in users:
    for contact in user.phone:
        contact.name
SELECT * FROM users
SELECT * FROM phones where user_id IN (1,2,3,4)
SELECT * from contacts where phone_id IN (30, 31, 32, 33)
from masoniteorm.relationships import has_many
class User:

  @has_many('company_id', 'id')
  def posts(self):
    from app.models.Post import Post
    return Post
User.joins('posts')
User.joins('posts', clause="right")
User.join_on('posts', lambda q: (
  q.where('active', 1)
))
user = User.where('active', 1).get()
from masoniteorm.scopes import scope
class User(Model):

  @scope
  def active(self, query):
    return query.where('active', 1)
user = User.active().get()
from masoniteorm.scopes import scope
class User(Model):

  @scope
  def active(self, query, active_or_inactive):
    return query.where('active', active_or_inactive)
user = User.active(1).get()
user = User.active(0).get()
from masoniteorm.scopes import SoftDeletesMixin

class User(Model, SoftDeletesMixin):
  # ..
User.delete(1)
# == UPDATE `users` SET `deleted_at` = '2020-01-01 10:00:00' WHERE `id` = 1
User.all() #== SELECT * FROM `users` WHERE `deleted_at` IS NULL
User.with_trashed().all() #== SELECT * FROM `users`
User.only_trashed().all() #== SELECT * FROM `users` WHERE `deleted_at` IS NOT NULL
User.where('admin', 1).restore() #== UPDATE `users` SET `deleted_at` = NULL WHERE `admin` = '1'
User.where('admin', 1).force_delete() #== DELETE FROM `users` WHERE `admin` = '1'
# user migrations
with self.schema.create("users") as table:
  # ...
  table.soft_deletes()
from masoniteorm.scopes import SoftDeletesMixin

class User(Model, SoftDeletesMixin):
  __deleted_at__ = "when_deleted"
User.truncate()
User.find(1).update({"username": "Joe"}, {'active': 1})
User.find(1).update({"username": "Joe"}, force=True)
class User(Model):
    __force_update__ = True

User.find(1).update({"username": "Joe"})
User.find(1).force_update({"username": "Joe"})
User.update_or_create({"username": "Joe"}, {
    'active': 1
})
User.activate_timestamps(False).update({"username": "Sam"})  # updated_at won't be modified during this update
User.create({"username": "Joe"})
user = User.create({"username": "Joe"}).fresh()

user.active #== 1
User.bulk_create([
  {"username": "Joe"},
  {"username": "John"},
  {"username": "Bill"},
  {"username": "Nick"},
])
User.builder.new().bulk_create([
  {"username": "Joe"},
  {"username": "John"},
  {"username": "Bill"},
  {"username": "Nick"},
])
User.serialize()
# returns {'id': 1, 'account_id': 1, 'first_name': 'John', 'last_name': 'Doe', 'email': 'johndoe@example.com', 'password': '$2b$12$pToeQW/1qs26CCozNiAfNugRRBNjhPvtIw86dvfJ0FDNcTDUNt3TW', 'created_at': '2021-01-03T11:35:48+00:00', 'updated_at': '2021-01-08T22:06:48+00:00' }
# User.py
class User(Model):
  # ...
  __hidden__ = ["password", "created_at"]
# User.py
class User(Model):
  # ...
  __visible__ = ["id", "name", "email"]
from masoniteorm.scopes import UUIDPrimaryKeyMixin

class User(Model, UUIDPrimaryKeyMixin):
  # ..
with self.schema.create("users") as table:
    table.uuid('id')
    table.primary('id')
import uuid
from masoniteorm.scopes import UUIDPrimaryKeyMixin

class User(Model, UUIDPrimaryKeyMixin):
  __uuid_version__ = 3
  # the two following parameters are only needed for UUID 3 and 5
  __uuid_namespace__ = uuid.NAMESPACE_DNS
  __uuid_name__ = "domain.com"
class User(Model):
  __casts__ = {"active": "int"}
class User(Model):

    __dates__ = ["verified_at"]
class User(Model):

    def get_new_date(self, datetime=None):
        # return new instance from datetime instance.
class User(Model):

    def get_new_datetime_string(self, datetime=None):
        return self.get_new_date(datetime).to_datetime_string()
class User:

    def get_name_attribute(self):
        return self.first_name + ' ' + self.last_name

user = User.find(1)
user.first_name #== "Joe"
user.last_name #== "Mancuso"
user.name #== "Joe Mancuso"
class User:

    def set_name_attribute(self, attribute):
        return str(attribute).upper()

user = User.find(1)
user.name = "joe mancuso"
user.name #== "JOE MANCUSO"
masonite-orm observer User --model User
class UserObserver:
    def created(self, user):
        pass

    def creating(self, user):
        pass

    #..
from app.models.User import User
from app.observers.UserObserver import UserObserver
from masonite.providers import Provider

class ModelProvider(Provider):

    def boot(self):
        User.observe(UserObserver())
        #..
from masoniteorm.models import Model
from some.place.UserObserver import UserObserver

class User(Model):
    #..
    
User.observe(UserObserver())
user = User.find(1)
articles = Articles.where('user_id', 2).get()

user.save_many('articles', articles)
user = User.find(1)
phone = Phone.find(30)

user.attach('phone', phone)
user = User.find(1)
user.is_dirty() #== False
user.name = "Joe"
user.is_dirty() #== True
user = User.find(1)
user.name #== Bill
user.name = "Joe"
user.get_dirty("name") #== Joe
user = User.find(1)
user.name #== Bill
user.name = "Joe"
user.get_original("name") #== Bill
user = User.find(1)
user.name #== Bill
user.name = "Joe"
user.save()
import os

DATABASES = {
    'default': 'mysql',
    'mysql': {
        'driver': 'mysql',
        'host': os.getenv('MYSQL_DATABASE_HOST'),
        'user': os.getenv('MYSQL_DATABASE_USER'),
        'password': os.getenv('MYSQL_DATABASE_PASSWORD'),
        'database': os.getenv('MYSQL_DATABASE_DATABASE'),
        'port': os.getenv('MYSQL_DATABASE_PORT'),
        'prefix': '',
        'options': {
            'charset': 'utf8mb4',
        },
        'log_queries': True
    },
    'postgres': {
        'driver': 'postgres',
        'host': os.getenv('POSTGRES_DATABASE_HOST'),
        'user': os.getenv('POSTGRES_DATABASE_USER'),
        'password': os.getenv('POSTGRES_DATABASE_PASSWORD'),
        'database': os.getenv('POSTGRES_DATABASE_DATABASE'),
        'port': os.getenv('POSTGRES_DATABASE_PORT'),
        'prefix': '',
        'log_queries': True
    },
    'sqlite': {
        'driver': 'sqlite',
        'database': 'orm.sqlite3',
        'prefix': '',
        'log_queries': True
    },
    'mssql': {
        'driver': 'mssql',
        'host': os.getenv('MSSQL_DATABASE_HOST'),
        'user': os.getenv('MSSQL_DATABASE_USER'),
        'password': os.getenv('MSSQL_DATABASE_PASSWORD'),
        'database': os.getenv('MSSQL_DATABASE_DATABASE'),
        'port': os.getenv('MSSQL_DATABASE_PORT'),
        'prefix': '',
        'log_queries': True
    },
}
from orator import DatabaseManager, Model

DATABASES = {
  # ...
}

DB = DatabaseManager(DATABASES)
Model.set_connection_resolver(DB)
from masoniteorm.connections import ConnectionResolver

DATABASES = {
  # ...
}

DB = ConnectionResolver().set_connection_details(DATABASES)
# Masonite
from masoniteorm.models import Model

class User(Model):
    pass
# Orator
from orator.orm import scope

class User(Model):

  @scope
  def popular(self, query):
        return query.where('votes', '>', 100)
# Masonite
from masoniteorm.scopes import scope

class User(Model):

  @scope
  def popular(self, query):
        return query.where('votes', '>', 100)
user = User.find(1)
user.phone().where('active', 1).get()
user = User.find(1)
user.related('phone').where('active', 1).get()
users = User.get() #== <masoniteorm.collections.Collection>
users.count() #== 50
users.pluck('email') #== <masoniteorm.collections.Collection> of emails

for user in users:
  user.email #== 'joe@masoniteproject.com'
users = User.get().all() #== [<app.User.User>, <app.User.User>]

Collection([1, 2, 3]).all() #== [1, 2, 3]
Collection([1, 2, 3, 4, 5]).avg() #== 3
average_price = Product.get().avg('price')
collection = Collection([1, 2, 3, 4, 5, 6, 7])
chunks = collection.chunk(2).serialize() #== [[1, 2], [3, 4], [5, 6], [7]]
collection = Collection([[1, 2, 3], [4, 5, 6]])
collection.collapse().serialize() #== [1, 2, 3, 4, 5, 6]
collection = Collection(['foo', 'bar'])
collection.contains('foo') #== True
collection = Collection([1, 2, 3, 4, 5])
collection.contains(lambda item: item > 5) #== False
collection = Collection([1, 2, 3, 4, 5])
diff = collection.diff([2, 4, 6, 8])
diff.all() #== [1, 3, 5]
posts.each(lambda post: post.author().save(author))
collection = Collection([1, 2, 3])
collection.every(lambda x: x*2 ).all() #== [2, 4, 6]
collection = Collection([1, 2, 3, 4])
filtered = collection.filter(lambda item: item > 2)
filtered.all() #== [3, 4]
collection = Collection([1, 2, 3, 4])
collection.first(lambda item: item > 2)
collection = Collection([1, 2, [3, 4, 5, {'foo': 'bar'}]])
flattened = collection.flatten().all() #== [1, 2, 3, 4, 5, 'bar']
collection = Collection([1, 2, 3, 4, 5])
collection.forget(1).all() #== [1,3,4,5]
collection.forget(0,2).all() #== [3,5]
collection = Collection([1, 2, 3, 4, 5, 6, 7, 8, 9])
chunk = collection.for_page(2, 4).all() #== [4, 5, 6, 7]
collection = Collection([1, 2, 3])
collection.get(0) #== 1
collection.get(4) #== None
collection.get(4, 'default') #== 'default'

collection = Collection({"apples": 1, "cherries": 2})
collection.get("apples") #== 1
collection = Collection([
  {"id": 1, "type": "a"},
  {"id": 2, "type": "b"},
  {"id": 3, "type": "a"}
])
collection.group_by("type").all()
#== {'a': [{'id': 1, 'type': 'a'}, {'id': 3, 'type': 'a'}],
#    'b': [{'id': 2, 'type': 'b'}]}
collection = Collection(['foo', 'bar', 'baz'])
collection.implode() #== foo,bar,baz
collection.implode('-') #== foo-bar-baz
collection = Collection([
    {'account_id': 1, 'product': 'Desk'},
    {'account_id': 2, 'product': 'Chair'}
])
collection.implode(key='product') #== Desk,Chair
collection.implode(" - ", key='product') #== Desk - Chair
Collection([]).is_empty() #== True
collection = Collection([1, 2, 3, 4])
last = collection.last(lambda item: item < 3) #== 2
collection = Collection([1, 2, 3, 4])
multiplied = collection.map(lambda item: item * 2).all() #== [2, 4, 6, 8]
collection = Collection([1,2])
collection.map_into(str).all() #== ["1", "2"]
class Point:
    @classmethod
    def as_dict(cls, coords, one_dim=False):
        if one_dim:
            return {"X": coords[0]}
        return {"X": coords[0], "Y": coords[1]}

collection = Collection([(1,2), (3,4)])
collection.map_into(Point, "as_dict") #== [{'X': 1, 'Y': 2}, {'X': 3, 'Y': 4}]
collection.map_into(Point, "as_dict", one_dim=True) #== [{'X': 1}, {'X': 3}]
collection = Collection([1,2,3])
collection.max() #== 3
collection = Collection([
    {'product_id': 1, 'product': 'Desk'},
    {'product_id': 2, 'product': 'Chair'},
    {'product_id': 3, 'product': 'Table'}
])
collection.max("product_id") #== 3
collection = Collection(['Desk', 'Chair'])
collection.merge(['Bookcase', 'Door'])
collection.all() #== ['Desk', 'Chair', 'Bookcase', 'Door']
collection = Collection([
    {'product_id': 1, 'product': 'Desk'},
    {'product_id': 2, 'product': 'Chair'},
    {'product_id': 3, 'product': None}
])

plucked = collection.pluck('product').all() #== ['Desk', 'Chair', None]
collection.pluck("product", "product_id") #== {1: 'Desk', 2: 'Chair', 3: None}
collection.pluck("product", keep_nulls=False) #== ['Desk', 'Chair']
collection = Collection([1, 2, 3, 4, 5])
collection.pop() #== 5
collection.all() #== [1, 2, 3, 4]
collection = Collection([1, 2, 3, 4])
collection.prepend(0)
collection.all() #== [0, 1, 2, 3, 4]
collection = Collection([1, 2, 3, 4])
collection.pull(1) #== 2
collection.all() #== [1, 3, 4]

collection = Collection({'apple': 1, 'cherry': 3, 'lemon': 2})
collection.pull('cherry') #== 3
collection.all() #== {'apple': 1, 'lemon': 2}
collection = Collection([1, 2, 3, 4])
collection.push(5)
collection.all() #== [1, 2, 3, 4, 5]
collection = Collection([1, 2, 3, 4])
collection.put(1, 5)
collection.all() #== [1, 5, 3, 4]

collection = Collection({'apple': 1, 'cherry': 3, 'lemon': 2})
collection.put('cherry', 0)
collection.all() #== {'apple': 1, 'cherry': 0, 'lemon': 2}
user = User.all().random() #== returns a random User instance
users = User.all().random(3) #== returns a Collection of 3 users
users.count() #== 3
users.all() #== returns a list of 3 users
collection = Collection([1, 2, 3])
collection.reduce(lambda result, item: (result or 0) + item) #== 6
collection.reduce(lambda result, item: (result or 0) + item, 4) #== 10
collection = Collection([1, 2, 3, 4])
filtered = collection.reject(lambda item: item > 2)
filtered.all() #== [1, 2]
collection = Collection([1, 2, 3])
collection.reverse().all() #== [3, 2, 1]
collection = Collection([1, 2, 3])
collection.serialize() #== [1, 2, 3]

collection = Collection([User.find(1)])
collection.serialize() #== [{'id': 1, 'name': 'John', 'email': 'john.doe@masonite.com'}]
collection = Collection([1, 2, 3, 4, 5])
collection.shift() #== 1
collection.all() #== [2, 3, 4, 5]
collection = Collection([5, 3, 1, 2, 4])
sorted = collection.sort()
sorted.all() #== [1, 2, 3, 4, 5]
Collection([1, 2, 3, 4, 5]).sum() #== 15
collection = Collection([
    {'name': 'JavaScript: The Good Parts', 'pages': 176},
    {'name': 'JavaScript: The Definitive Guide', 'pages': 1096}
])
collection.sum('pages') #== 1272
collection = Collection([0, 1, 2, 3, 4, 5])
chunk = collection.take(3)
chunk.all() #== [0, 1, 2]
chunk = collection.take(-2)
chunk.all() #== [4, 5]
collection = Collection([{'name': 'Desk', 'price': 200}])
collection.to_json() #== '[{"name": "Desk", "price": 200}]'
collection = Collection([1, 2, 3, 4, 5])
collection.transform(lambda item: item * 2)
collection.all() #== [2, 4, 6, 8, 10]
collection = Collection([1, 1, 2, 2, 3, 4, 2])
unique = collection.unique()
unique.all() #== [1, 2, 3, 4]
collection = Collection([
    {'name': 'Sam', 'role': 'admin'},
    {'name': 'Joe', 'role': 'basic'},
    {'name': 'Joe', 'role': 'admin'},
])
unique = collection.unique('name')
unique.all()
# [
#     {'name': 'Sam', 'role': 'admin'},
#     {'name': 'Joe', 'role': 'basic'}
# ]
collection = Collection([
    {'name': 'Desk', 'price': 200},
    {'name': 'Chair', 'price': 100},
    {'name': 'Bookcase', 'price': 150},
    {'name': 'Door', 'price': 100},
])
filtered = collection.where('price', 100)
filtered.all()
# [
#     {'name': 'Chair', 'price': 100},
#     {'name': 'Door', 'price': 100}
# ]
collection = Collection(['Chair', 'Desk'])
zipped = collection.zip([100, 200])
zipped.all() #== [('Chair', 100), ('Desk', 200)]
$ pip install masonite-orm
# config/database.py
DATABASES = {
  "default": "mysql",
  "mysql": {
    "host": "127.0.0.1",
    "driver": "mysql",
    "database": "masonite",
    "user": "root",
    "password": "",
    "port": 3306,
    "log_queries": False,
    "options": {
      #  
    }
  },  
  "postgres": {
    "host": "127.0.0.1",
    "driver": "postgres",
    "database": "masonite",
    "user": "root",
    "password": "",
    "port": 5432,
    "log_queries": False,
    "options": {
      #  
    }
  },
  "sqlite": {
    "driver": "sqlite",
    "database": "masonite.sqlite3",
  }
}
# config/database.py
from masoniteorm.connections import ConnectionResolver

DATABASES = {
  # ...
}

DB = ConnectionResolver().set_connection_details(DATABASES)
"mssql": {
    "host": "127.0.0.1",
    "driver": "mssql",
    "database": "masonite",
    "user": "root",
    "password": "",
    "port": 1433,
    "log_queries": False,
    "options": {
      "trusted_connection": "Yes",
      "integrated_security": "sspi",
      "instance": "SQLExpress",
      "authentication": "ActiveDirectoryPassword",
      "driver": "ODBC Driver 17 for SQL Server",
      "connection_timeout": 15,
    }
  },
from config.database import DB

DB.begin_transaction()
User.create({..})
DB.commit()
DB.rollback()
DB.begin_transaction("staging")
DB.commit("staging")
DB.rollback("staging")
with DB.transaction():
  User.create({..})
logger = logging.getLogger('masoniteorm.connection.queries')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()

logger.addHandler(handler)
logger = logging.getLogger('masoniteorm.connection.queries')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
file_handler = logging.FileHandler('queries.log')

logger.addHandler(handler)
logger.addHandler(file_handler)
from config.database import DB

result = DB.statement("select * from users where users.active = 1")
from config.database import DB

result = DB.statement("select * from users where users.active = ?", [1])
from config.database import DB

result = DB.statement("select * from users where users.active = ?", [1], connection="production")

Schema & Migrations

Migrations are used to build and modify your database tables. This is done through the use of migration files and the Schema class. Migration files are really just wrappers around the Schema class, as well as a way for Masonite to manage which migrations have run and which ones have not.

Creating Migrations

Creating migrations is easy with the migration commands. To create one, simply run:

$ masonite-orm migration migration_for_users_table

This will create a migration file for you and put it in the databases/migrations directory.

If you want to create a starter migration, that is, a migration with some boilerplate for what you are planning to do, you can use the --create or --table flags:

$ masonite-orm migration migration_for_users_table --create users

This will set up a migration for you with some boilerplate for creating a new table.

$ masonite-orm migration migration_for_users_table --table users

This will set up a migration for you with some boilerplate for modifying an existing table.

Building Migrations

To start building up your migration, simply modify the up method and start adding any of the available methods below to your migration.

A simple example would look like this for a new table:

class MigrationForUsersTable(Migration):
    def up(self):
        """
        Run the migrations.
        """
        with self.schema.create("users") as table:
            table.increments('id')
            table.string('username')
            table.string('email').unique()
            table.string('password')
            table.boolean('is_admin')
            table.integer('age')

            table.timestamps()

    def down(self):
        """
        Revert the migrations.
        """
        self.schema.drop("users")

Available Methods

Command
Description

table.string()

The VARCHAR equivalent column. Can optionally pass in a length: table.string('name', length=181)

table.char()

CHAR equivalent column.

table.text()

TEXT equivalent column.

table.longtext()

LONGTEXT equivalent column.

table.integer()

The INT equivalent column. Can also specify a length: table.integer('age', length=5)

table.unsigned_integer()

UNSIGNED INT equivalent column.

table.unsigned()

Alias for unsigned_integer

table.tiny_integer()

TINY INT equivalent column.

table.small_integer()

SMALL INT equivalent column.

table.medium_integer()

MEDIUM INT equivalent column.

table.big_integer()

BIG INT equivalent column.

table.increments()

The auto-incrementing version of the column. An unsigned, non-nullable, auto-incrementing integer.

table.tiny_increments()

TINY auto incrementing equivalent column.

table.big_increments()

An unsigned, non-nullable, auto-incrementing big integer. Use this if you expect the table to hold a very large number of rows.

table.binary()

BINARY equivalent column. Falls back to a TEXT field on databases that do not support it.

table.boolean()

BOOLEAN equivalent column.

table.json()

JSON equivalent column.

table.jsonb()

JSONB equivalent column on Postgres. Falls back to a LONGBLOB equivalent column on other databases.

table.date()

DATE equivalent column.

table.year()

YEAR equivalent column.

table.datetime()

DATETIME equivalent column.

table.timestamp()

TIMESTAMP equivalent column.

table.time()

TIME equivalent column.

table.timestamps()

Creates created_at and updated_at TIMESTAMP columns on the table, defaulting to the current time.

table.decimal()

DECIMAL equivalent column. Can also specify the length and decimal position. table.decimal('salary', 17, 6)

table.double()

DOUBLE equivalent column. Can also specify a float length table.double('salary', 17,6)

table.float()

FLOAT equivalent column.

table.enum()

ENUM equivalent column. You can also specify available options as a list: table.enum('flavor', ['chocolate', 'vanilla']). Falls back to a TEXT field with a constraint on unsupported databases.

table.geometry()

GEOMETRY equivalent column.

table.point()

POINT equivalent column.

table.uuid()

A CHAR column used to store UUIDs. table.uuid('id'). Default length is 36.

table.soft_deletes()

A nullable DATETIME column named deleted_at. This is used by the SoftDeletes scope.

table.table_comment("The users table")

Adds a comment to the table.
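As an aside on table.uuid() above: the default length of 36 matches the canonical hyphenated string form of a UUID, which you can confirm with Python's standard uuid module:

```python
import uuid

# The canonical string form of a UUID is 36 characters
# (32 hex digits plus 4 hyphens), matching the default
# CHAR length used by table.uuid().
value = str(uuid.uuid4())
print(len(value))  # 36
```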

Changes & Rolling Back Migrations

In addition to building up the migration, you should also build onto the down method which should reverse whatever was done in the up method. If you create a table in the up method, you should drop the table in the down method.

Command
Description

table.drop_table()

DROP TABLE equivalent statement.

table.drop_table_if_exists()

DROP TABLE IF EXISTS equivalent statement.

table.drop_column()

DROP COLUMN equivalent statement. Can take one or multiple column names. drop_column('column1', 'column2')

table.drop_index()

Drops the index. Must pass in the name of the index. drop_index('email_index')

table.drop_unique()

Drops the uniqueness constraint. Must pass in the name of the constraint. table.drop_unique('users_email_unique')

table.drop_foreign()

Drops the foreign key. Must specify the index name. table.drop_foreign('users_article_id_foreign')

table.rename()

Renames a column to a new column. Must take the old column name, the new column name, and the data type. table.rename("user_id", "profile_id", "unsigned_integer")

table.drop_primary()

Drops the primary key constraint. Must pass in the constraint name table.drop_primary('users_id_primary')

Getting Migration Status

At any time you can get the migrations that have run or still need to be run:

$ masonite-orm migrate:status

Seeing Migration SQL Dumps

If you would like to see just the SQL that would run instead of running the actual migrations, you can specify the -s flag (short for --show). This works on the migrate and migrate:rollback commands.

$ masonite-orm migrate -s

Refreshing Migrations

Refreshing a database is simply rolling back all migrations and then migrating again. This "refreshes" your database.

You can refresh by running the command:

$ masonite-orm migrate:refresh

You can also seed your database after refreshing your migrations, which will rebuild your database to some desired state.

You can run all seeders located in the Database Seeder class by running:

$ masonite-orm migrate:refresh --seed

Or simply run a specific seeder:

$ masonite-orm migrate:refresh --seed CustomTable

CustomTable is the name of the seeder without the "Seeder" suffix. Internally this will run the desired CustomTableSeeder.
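The name you pass on the command line maps to a class name by appending the "Seeder" suffix. A plain-Python sketch of that convention (the helper below is illustrative, not part of the ORM):

```python
def seeder_class_name(name):
    """Map a CLI seeder name to its class name by appending 'Seeder'."""
    return f"{name}Seeder"

print(seeder_class_name("CustomTable"))  # CustomTableSeeder
```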

Modifiers

In addition to the available columns you can use, you can also specify some modifiers which will change the behavior of the column:

Command
Description

.nullable()

Allows NULL values to be inserted into the column.

.unique()

Forces all values in the column to be unique.

.after(other_column)

Adds the column after another column in the table. Can be used like table.string('is_admin').after('email').

.unsigned()

Makes the column unsigned. Used with the table.integer('age').unsigned() column.

.use_current()

Makes the column use the CURRENT_TIMESTAMP modifier.

.default(value)

Specify a default value for the column. Can be used like table.boolean("is_admin").default(False)

.primary()

Specify that the column should be used for the primary key constraint. Used like table.string('role_id').primary()

.comment()

Adds a comment to the column. Used like table.string('name').comment("A users name")

Indexes

In addition to columns, you can also create indexes. Below are the available indexes you can create:

Command
Description

table.primary(column)

Creates a primary key constraint. Can pass multiple columns to create a composite key like table.primary(['id', 'email']). Also supports a name parameter to specify the name of the index.

table.unique(column)

Makes a unique index. Can also pass multiple columns table.unique(['email', 'phone_number']). Also supports a name parameter to specify the name of the index.

table.index(column)

Creates an index on the column. table.index('email'). Also supports a name parameter to specify the name of the index.

table.fulltext(column)

Creates a FULLTEXT index on the column or columns. table.fulltext('email'). Note this only works on MySQL databases and will be ignored on other databases. Also supports a name parameter to specify the name of the index.

Foreign Keys

If you want to create a foreign key you can do so simply as well:

table.foreign('local_column').references('other_column').on('other_table')

And optionally specify an on_delete or on_update method:

table.foreign('local_column').references('other_column').on('other_table').on_update('set null')

You can use these options:

Command
Description

.on_update('set null')

Sets the ON UPDATE SET NULL property on the constraint.

.on_update('cascade')

Sets the ON UPDATE CASCADE property on the constraint.

.on_delete('set null')

Sets the ON DELETE SET NULL property on the constraint.

.on_delete('cascade')

Sets the ON DELETE CASCADE property on the constraint.

Available options for on_update and on_delete are:

  • cascade

  • set null

  • restrict

  • no action

  • default

You can also pass a name parameter to change the name of the constraint:

table.foreign('local_column', name="foreign_constraint").references('other_column').on('other_table')

You may also use a shorthand method:

table.add_foreign('local_column.other_column.other_table', name="foreign_constraint")
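The shorthand string carries the same three pieces as the longhand chain, separated by dots. A plain-Python sketch of how the pieces line up (the parsing helper is illustrative, not the ORM's actual implementation):

```python
def parse_foreign_shorthand(shorthand):
    """Split 'local_column.other_column.other_table' into the three
    pieces used by the longhand foreign().references().on() chain."""
    local_column, references, on_table = shorthand.split(".")
    return {"column": local_column, "references": references, "on": on_table}

parts = parse_foreign_shorthand("local_column.other_column.other_table")
print(parts)  # {'column': 'local_column', 'references': 'other_column', 'on': 'other_table'}
```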

Changing Columns

If you would like to change a column, simply specify the new column definition and then call the .change() method on it.

Here is an example of changing an email field to a nullable field:

class MigrationForUsersTable(Migration):
    def up(self):
        """
        Run the migrations.
        """
        with self.schema.table("users") as table:
            table.string('email').nullable().change()

        with self.schema.table("users") as table:
            table.string('email').unique()


    def down(self):
        """
        Revert the migrations.
        """
        pass

Truncating

You can truncate a table:

schema.truncate("users")

You can also temporarily disable foreign key checks and truncate a table:

schema.truncate("users", foreign_keys=False)

Dropping a Table

You can drop a table:

schema.drop_table("users")

Dropping a Table If It Exists

You can drop a table if it exists:

schema.drop_table_if_exists("users")

Seeding

Seeding is simply a way to quickly put data into your tables.

Creating Seeds

You can create a seed file and seed class which can be used for keeping seed information and running it later.

To create a seed run the command:

$ masonite-orm seed User

This will create some boilerplate for your seeds that looks like this:

from masoniteorm.seeds import Seeder

class UserTableSeeder(Seeder):

    def run(self):
        """Run the database seeds."""
        pass

From here you can start building your seed.

Building Your Seed

A simple seed might be creating a specific user that you use during testing.

from masoniteorm.seeds import Seeder
from models import User

class UserTableSeeder(Seeder):

    def run(self):
        """Run the database seeds."""
        User.create({
            "username": "Joe",
            "email": "joe@masoniteproject.com",
            "password": "secret"
        })

Running Seeds

You can easily run your seeds:

$ masonite-orm seed:run User


Factories

Factories are simple and easy ways to generate mass amounts of data quickly. You can put all your factories into a single file.

Creating A Factory Method

Factory methods are simple methods that take a single Faker instance. For the methods available on the faker variable, reference the Faker documentation.

# config/factories.py

def user_factory(self, faker):
    return {
        'name': faker.name(),
        'email': faker.email(),
        'password': 'secret'
    }

Registering Factories

Once created you can register the method with the Factory class:

# config/factories.py
from masoniteorm import Factory
from models import User

def user_factory(self, faker):
    return {
        'name': faker.name(),
        'email': faker.email(),
        'password': 'secret'
    }

Factory.register(User, user_factory)

Naming Factories

If you need to you can also name your factories so you can use different factories for different use cases:

# config/factories.py
from masoniteorm import Factory
from models import User

def user_factory(self, faker):
    return {
        'name': faker.name(),
        'email': faker.email(),
        'password': 'secret'
    }

def admin_user_factory(self, faker):
    return {
        'name': faker.name(),
        'email': faker.email(),
        'password': 'secret',
        'is_admin': 1
    }

Factory.register(User, user_factory)
Factory.register(User, admin_user_factory, name="admin_users")

Calling Factories

To use the factories you can import the Factory class from where you built your factories. In our case it was the config/factories.py file:

from config.factories import Factory
from models import User

users = Factory(User, 50).create() #== <masoniteorm.collections.Collection object>
user = Factory(User).create() #== <models.User object>

This will persist these users to the database. If you want to simply make the models or collection (and not persist them) then use the make method:

from config.factories import Factory
from models import User

users = Factory(User, 50).make() #== <masoniteorm.collections.Collection object>
user = Factory(User).make() #== <models.User object>

Again this will NOT persist values to the database.

Calling Named Factories

By default, Masonite will use the factory you created without a name. If you named the factories you can call those specific factories easily:

from config.factories import Factory
from models import User

users = Factory(User, 50).create(name="admin_users") #== <masoniteorm.collections.Collection object>

After Creating

You can also specify a second factory method that will run after a model is created. This would look like:

# config/factories.py
from masoniteorm import Factory
from models import User

def user_factory(self, faker):
    return {
        'name': faker.name(),
        'email': faker.email(),
        'password': 'secret'
    }

def after_users(self, model, faker):
    model.verified = True

Factory.register(User, user_factory)
Factory.after_creating(User, after_users)

Now when you create a user it will be passed to this after_creating method:

user = Factory(User).create()
user.verified #== True

Modifying Factory Values

If you want to modify any values you previously set in the factory you created, you can pass a dictionary into the create or make method:

from config.factories import Factory
from models import User

users = Factory(User, 50).create({'email': 'john@masoniteproject.com'}) #== <masoniteorm.collections.Collection object>

This is a great way to set constant values when testing that you can later assert against.
