This section details direct usage of the Engine, Connection, and related objects. It's important to note that when using the SQLAlchemy ORM, these objects are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM's higher level management services, the Engine and Connection are king (and queen?) - read on.
Recall from Engine Configuration that an Engine is created via the create_engine() call:

engine = create_engine('mysql://scott:tiger@localhost/test')
The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections on behalf of the process and is intended to be called upon in a concurrent fashion. The Engine is not synonymous with the DBAPI connect function, which represents just one connection resource - the Engine is most efficient when created just once at the module level of an application, not per-object or per-function call.
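As a minimal sketch of this pattern, an application might establish its Engine in a dedicated module and import it everywhere else; the module name and URL here are illustrative only:

# myapp/database.py - hypothetical module holding the application-wide Engine
from sqlalchemy import create_engine

# created once at import time and reused for the lifetime of the process
engine = create_engine('mysql://scott:tiger@localhost/test')

# elsewhere in the application:
#
#     from myapp.database import engine
#     connection = engine.connect()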
For a multiple-process application that uses the os.fork system call, or for example the Python multiprocessing module, it's usually required that a separate Engine be used for each child process. This is because the Engine maintains a reference to a connection pool that ultimately references DBAPI connections - these tend to not be portable across process boundaries. An Engine that is configured not to use pooling (which is achieved via the usage of NullPool) does not have this requirement.
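As a brief sketch, assuming the application genuinely needs to avoid pooling, the NullPool class can be passed to create_engine() via the poolclass argument:

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# each connect() opens a brand new DBAPI connection, and close()
# discards it entirely rather than returning it to a pool
engine = create_engine('mysql://scott:tiger@localhost/test', poolclass=NullPool)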
The engine can be used directly to issue SQL to the database. The most generic way is to first procure a connection resource, which you get via the Engine.connect() method:

connection = engine.connect()
result = connection.execute("select username from users")

for row in result:
    print("username:", row['username'])

connection.close()
The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.
The returned result is an instance of ResultProxy, which references a DBAPI cursor and provides a largely compatible interface with that of the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy when all of its result rows (if any) are exhausted. A ResultProxy that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.
When the close() method is called, the referenced DBAPI connection is released to the connection pool. From the perspective of the database itself, nothing is actually “closed”, assuming pooling is in use. The pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next usage.
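The Connection object also functions as a Python context manager, so the close() step can be handled automatically; a brief sketch, reusing the same hypothetical users table:

# the connection is returned to the pool when the "with" block exits,
# even if an exception is raised inside it
with engine.connect() as connection:
    result = connection.execute("select username from users")
    for row in result:
        print("username:", row['username'])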
The above procedure can be performed in a shorthand way by using the execute() method of Engine itself:

result = engine.execute("select username from users")

for row in result:
    print("username:", row['username'])
Where above, the execute() method acquires a new Connection on its own, executes the statement with that object, and returns the ResultProxy. In this case, the ResultProxy contains a special flag known as close_with_result, which indicates that when its underlying DBAPI cursor is closed, the Connection object itself is also closed, which again returns the DBAPI connection to the connection pool, releasing transactional resources.
If the ResultProxy potentially has rows remaining, it can be instructed to close out its resources explicitly:

result.close()
If the ResultProxy has pending rows remaining and is dereferenced by the application without being closed, Python garbage collection will ultimately close out the cursor as well as trigger a return of the pooled DBAPI connection resource to the pool (SQLAlchemy achieves this by the usage of weakref callbacks - never the __del__ method) - however it's never a good idea to rely upon Python garbage collection to manage resources.
Our example above illustrated the execution of a textual SQL string. The execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial.
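For example, assuming a Table construct named users_table that includes a username column has been defined as in that tutorial (the table and column names here are assumptions for illustration), a select() construct can be passed to Connection.execute() directly:

from sqlalchemy import select

with engine.connect() as connection:
    # the select() construct is compiled into a SELECT statement
    # appropriate to the dialect of the Engine in use
    result = connection.execute(select([users_table]))
    for row in result:
        print("username:", row['username'])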
Note

This section describes how to use transactions when working directly with Engine and Connection objects. When using the SQLAlchemy ORM, the public API for transaction control is via the Session object, which makes usage of the Transaction object internally. See Managing Transactions for further information.
The Connection object provides a begin() method which returns a Transaction object. This object is usually used within a try/except clause so that it is guaranteed to invoke Transaction.rollback() or Transaction.commit():

connection = engine.connect()
trans = connection.begin()
try:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
    trans.commit()
except:
    trans.rollback()
    raise
The above block can be created more succinctly using context managers, either given an Engine:

# runs a transaction
with engine.begin() as connection:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
Or from the Connection, in which case the Transaction object is available as well:

with connection.begin() as trans:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
The Transaction object also handles “nested” behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.

# method_a starts a transaction and calls method_b
def method_a(connection):
    trans = connection.begin()  # open a transaction
    try:
        method_b(connection)
        trans.commit()  # transaction is committed here
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# method_b also starts a transaction
def method_b(connection):
    trans = connection.begin()  # open a transaction - this runs in the context of method_a's transaction
    try:
        connection.execute("insert into mytable values ('bat', 'lala')")
        connection.execute(mytable.insert(), col1='bat', col2='lala')
        trans.commit()  # transaction is not committed yet
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# open a Connection and call method_a
conn = engine.connect()
method_a(conn)
conn.close()
Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This “nesting” behavior allows the creation of functions which “guarantee” that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.
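The same pattern can also be sketched with context managers, relying on the fact that the inner begin() produces an emulated transaction which defers to the outermost one; this is a sketch only, reusing the hypothetical mytable from the example above:

def method_a(connection):
    with connection.begin():       # outermost transaction
        method_b(connection)       # commits when this block exits without error

def method_b(connection):
    with connection.begin():       # participates in the enclosing transaction
        connection.execute(mytable.insert(), col1='bat', col2='lala')

with engine.connect() as conn:
    method_a(conn)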
The previous transaction example illustrates how to use Transaction so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using Transaction? While some DBAPI implementations provide various special “non-transactional” modes, the core behavior of DBAPI per PEP-0249 is that a transaction is always in progress, providing only rollback() and commit() methods but no begin(). SQLAlchemy assumes this is the case for any given DBAPI.
Given this requirement, SQLAlchemy implements its own “autocommit” feature which works completely consistently across all backends. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, as well as data definition language (DDL) statements such as CREATE TABLE, ALTER TABLE, and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on the presence of the autocommit=True execution option on the statement. If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend:

conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits
The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.
Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, and Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:

engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))
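The same flag may also be applied at the Connection level, in which case it remains in effect for all statements invoked on that Connection; a brief sketch, reusing the conn from the earlier example:

# all statements emitted via this Connection will autocommit
# whenever no Transaction is otherwise in progress
autocommit_conn = conn.execution_options(autocommit=True)
autocommit_conn.execute("INSERT INTO users VALUES (2, 'ed')")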
Recall from the first section we mentioned executing with and without explicit usage of Connection. “Connectionless” execution refers to the usage of the execute() method on an object which is not a Connection. This was illustrated using the execute() method of Engine:

result = engine.execute("select username from users")

for row in result:
    print("username:", row['username'])
In addition to “connectionless” execution, it is also possible to use the execute() method of any Executable construct, which is a marker for SQL expression objects that support execution. The SQL expression object itself references an Engine or Connection known as the bind, which it uses in order to provide so-called “implicit” execution services.

Given a table as below:

from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
Explicit execution delivers the SQL text or constructed SQL expression to the execute() method of Connection:

engine = create_engine('sqlite:///file.db')
connection = engine.connect()
result = connection.execute(users_table.select())

for row in result:
    # ....

connection.close()
Explicit, connectionless execution delivers the expression to the execute() method of Engine:

engine = create_engine('sqlite:///file.db')
result = engine.execute(users_table.select())

for row in result:
    # ....

result.close()
Implicit execution is also connectionless, and makes usage of the execute() method on the expression itself. This method is provided as part of the Executable class, which refers to a SQL statement that is sufficient for being invoked against the database. The method makes usage of the assumption that either an Engine or Connection has been bound to the expression object. By “bound” we mean that the special attribute MetaData.bind has been used to associate a series of Table objects and all SQL constructs derived from them with a specific engine:

engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()

for row in result:
    # ....

result.close()
Above, we associate an Engine with a MetaData object using the special attribute MetaData.bind. The select() construct produced from the Table object has a method execute(), which will search for an Engine that’s “bound” to the Table.
Overall, the usage of “bound metadata” has three general effects:

- SQL statement objects gain an Executable.execute() method which automatically locates a “bind” with which to execute themselves.
- The ORM Session object supports using “bound metadata” in order to establish which Engine should be used to invoke SQL statements on behalf of a particular mapped class, though the Session also features its own explicit system of establishing complex Engine / mapped class configurations.
- The MetaData.create_all(), MetaData.drop_all(), Table.create(), Table.drop(), and “autoload” features all make usage of the bound Engine automatically without the need to pass it explicitly.

Note
The concepts of “bound metadata” and “implicit execution” are not emphasized in modern SQLAlchemy. While they offer some convenience, they are no longer required by any API and are never necessary.
In applications where multiple Engine objects are present, each one logically associated with a certain set of tables (i.e. vertical sharding), the “bound metadata” technique can be used so that individual Table objects can refer to the appropriate Engine automatically; in particular this is supported within the ORM via the Session object as a means to associate Table objects with an appropriate Engine, as an alternative to using the bind arguments accepted directly by the Session. However, the “implicit execution” technique is not at all appropriate for use with the ORM, as it bypasses the transactional context maintained by the Session.
Overall, in the vast majority of cases, “bound metadata” and “implicit execution” are not useful. While “bound metadata” has a marginal level of usefulness with regards to ORM configuration, “implicit execution” is a very old usage pattern that in most cases is more confusing than it is helpful, and its usage is discouraged. Both patterns seem to encourage the overuse of expedient “short cuts” in application design which lead to problems later on.
Modern SQLAlchemy usage, especially the ORM, places a heavy stress on working within the context of a transaction at all times; the “implicit execution” concept makes the job of associating statement execution with a particular transaction much more difficult. The Executable.execute() method on a particular SQL statement usually implies that the execution is not part of any particular transaction, which is usually not the desired effect.
In both “connectionless” examples, the Connection is created behind the scenes; the ResultProxy returned by the execute() call references the Connection used to issue the SQL statement. When the ResultProxy is closed, the underlying Connection is closed for us, resulting in the DBAPI connection being returned to the pool with transactional resources removed.
To support multi-tenancy applications that distribute common sets of tables into multiple schemas, the Connection.execution_options.schema_translate_map execution option may be used to repurpose a set of Table objects to render under different schema names without any changes.

Given a table:

user_table = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)
The “schema” of this Table as defined by the Table.schema attribute is None. The Connection.execution_options.schema_translate_map can specify that all Table objects with a schema of None would instead render the schema as user_schema_one:

connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

result = connection.execute(user_table.select())
The above code will invoke SQL on the database of the form:
SELECT user_schema_one.user.id, user_schema_one.user.name FROM
user_schema_one.user
That is, the schema name is substituted with our translated name. The map can specify any number of target->destination schemas:

connection = engine.connect().execution_options(
    schema_translate_map={
        None: "user_schema_one",       # no schema name -> "user_schema_one"
        "special": "special_schema",   # schema="special" becomes "special_schema"
        "public": None                 # Table objects with schema="public" will render with no schema
    })
The Connection.execution_options.schema_translate_map parameter affects all DDL and SQL constructs generated from the SQL expression language, as derived from the Table or Sequence objects. It does not impact literal string SQL used via the expression.text() construct nor via plain strings passed to Connection.execute().
The feature takes effect only in those cases where the name of the schema is derived directly from that of a Table or Sequence; it does not impact methods where a string schema name is passed directly. By this pattern, it takes effect within the “can create” / “can drop” checks performed when methods such as MetaData.create_all() or MetaData.drop_all() are called, and it takes effect when using table reflection given a Table object. However it does not affect the operations present on the Inspector object, as the schema name is passed to these methods explicitly.
New in version 1.1.
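For instance, DDL emitted through a Connection configured with the option is translated as well; a brief sketch, assuming the user_table and engine from the example above:

connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

# CREATE TABLE is emitted against "user_schema_one"."user"
user_table.create(connection, checkfirst=True)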
The Engine refers to a connection pool, which means under normal circumstances, there are open database connections present while the Engine object is still resident in memory. When an Engine is garbage collected, its connection pool is no longer referred to by that Engine, and assuming none of its connections are still checked out, the pool and its connections will also be garbage collected, which has the effect of closing out the actual database connections as well. But otherwise, the Engine will hold onto open database connections assuming it uses the normally default pool implementation of QueuePool.
The Engine is intended to normally be a permanent fixture established up-front and maintained throughout the lifespan of an application. It is not intended to be created and disposed on a per-connection basis; it is instead a registry that maintains both a pool of connections as well as configurational information about the database and DBAPI in use, as well as some degree of internal caching of per-database resources.
However, there are many cases where it is desirable that all connection resources referred to by the Engine be completely closed out. It's generally not a good idea to rely on Python garbage collection for this to occur for these cases; instead, the Engine can be explicitly disposed using the Engine.dispose() method. This disposes of the engine's underlying connection pool and replaces it with a new one that's empty. Provided that the Engine is discarded at this point and no longer used, all checked-in connections which it refers to will also be fully closed.
Valid use cases for calling Engine.dispose() include:

- When a program uses os.fork() or the multiprocessing module, and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries.
- Within test suites or multitenancy scenarios where many ad-hoc, short-lived Engine objects may be created and disposed.

Connections that are checked out are not discarded when the engine is disposed or garbage collected, as these connections are still strongly referenced elsewhere by the application. However, after Engine.dispose() is called, those connections are no longer associated with that Engine; when they are closed, they will be returned to their now-orphaned connection pool which will ultimately be garbage collected, once all connections which refer to it are also no longer referenced anywhere. Since this process is not easy to control, it is strongly recommended that Engine.dispose() is called only after all checked out connections are checked in or otherwise de-associated from their pool.
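A sketch of the fork scenario using the multiprocessing module follows; the worker function name is illustrative, and the engine is assumed to have been created at module level as shown earlier:

import multiprocessing

def run_in_child():
    # discard connections inherited from the parent process; the Engine
    # will then create new DBAPI connections local to this child
    engine.dispose()

    with engine.connect() as connection:
        connection.execute("select 1")

process = multiprocessing.Process(target=run_in_child)
process.start()
process.join()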
An alternative for applications that are negatively impacted by the Engine object's use of connection pooling is to disable pooling entirely. This typically incurs only a modest performance impact upon the use of new connections, and means that when a connection is checked in, it is entirely closed out and is not held in memory. See Switching Pool Implementations for guidelines on how to disable pooling.
The “threadlocal” engine strategy is an optional feature which can be used by non-ORM applications to associate transactions with the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a Connection.
Deprecated since version 1.3: The “threadlocal” engine strategy is deprecated, and will be removed in a future release.
This strategy is designed for a particular pattern of usage which is generally considered as a legacy pattern. It has no impact on the “thread safety” of SQLAlchemy components or one's application. It also should not be used when using an ORM Session object, as the Session itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.

Enabling threadlocal is achieved as follows:

db = create_engine('mysql://localhost/test', strategy='threadlocal')
The above Engine will now acquire a Connection using connection resources derived from a thread-local variable whenever Engine.execute() or Engine.contextual_connect() is called. This connection resource is maintained as long as it is referenced, which allows multiple points of an application to share a transaction while using connectionless execution:

def call_operation1():
    engine.execute("insert into users values (?, ?)", 1, "john")

def call_operation2():
    users.update(users.c.user_id==5).execute(name='ed')

db.begin()
try:
    call_operation1()
    call_operation2()
    db.commit()
except:
    db.rollback()
Explicit execution can be mixed with connectionless execution by using the Engine.connect() method to acquire a Connection that is not part of the threadlocal scope:

db.begin()
conn = db.connect()
try:
    conn.execute(log_table.insert(), message="Operation started")
    call_operation1()
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()
To access the Connection that is bound to the threadlocal scope, call Engine.contextual_connect():

conn = db.contextual_connect()
call_operation3(conn)
conn.close()
Calling close() on the “contextual” connection does not release its resources until all other usages of that resource are closed as well, including that any ongoing transactions are rolled back or committed.
There are some cases where SQLAlchemy does not provide a genericized way of accessing some DBAPI functions, such as calling stored procedures as well as dealing with multiple result sets. In these cases, it's just as expedient to deal with the raw DBAPI connection directly.
The most common way to access the raw DBAPI connection is to get it from an already present Connection object directly. It is present using the Connection.connection attribute:

connection = engine.connect()
dbapi_conn = connection.connection
The DBAPI connection here is actually “proxied” in terms of the originating connection pool, however this is an implementation detail that in most cases can be ignored. As this DBAPI connection is still contained within the scope of an owning Connection object, it is best to make use of the Connection object for most features such as transaction control as well as calling the Connection.close() method; if these operations are performed on the DBAPI connection directly, the owning Connection will not be aware of these changes in state.
To overcome the limitations imposed by the DBAPI connection that is maintained by an owning Connection, a DBAPI connection is also available without the need to procure a Connection first, using the Engine.raw_connection() method of Engine:

dbapi_conn = engine.raw_connection()
This DBAPI connection is again a “proxied” form as was the case before. The purpose of this proxying is now apparent, as when we call the .close() method of this connection, the DBAPI connection is typically not actually closed, but instead released back to the engine's connection pool:

dbapi_conn.close()
While SQLAlchemy may in the future add built-in patterns for more DBAPI use cases, there are diminishing returns as these cases tend to be rarely needed and they also vary highly dependent on the type of DBAPI in use, so in any case the direct DBAPI calling pattern is always there for those cases where it is needed.
Some recipes for DBAPI connection use follow.
For stored procedures with special syntactical or parameter concerns, DBAPI-level callproc may be used:

connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.callproc("my_procedure", ['x', 'y', 'z'])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()
Multiple result set support is available from a raw DBAPI cursor using the nextset method:

connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("select * from table1; select * from table2")
    results_one = cursor.fetchall()
    cursor.nextset()
    results_two = cursor.fetchall()
    cursor.close()
finally:
    connection.close()
The create_engine() function call locates the given dialect using setuptools entrypoints. These entry points can be established for third party dialects within the setup.py script. For example, to create a new dialect “foodialect://”, the steps are as follows:

Create a package called foodialect. The package should have a module containing the dialect class, which is typically a subclass of sqlalchemy.engine.default.DefaultDialect. In this example let's say it's called FooDialect and its module is accessed via foodialect.dialect. The entry point can be established in setup.py as follows:

entry_points="""
[sqlalchemy.dialects]
foodialect = foodialect.dialect:FooDialect
"""
If the dialect is providing support for a particular DBAPI on top of an existing SQLAlchemy-supported database, the name can be given including a database-qualification. For example, if FooDialect were in fact a MySQL dialect, the entry point could be established like this:

entry_points="""
[sqlalchemy.dialects]
mysql.foodialect = foodialect.dialect:FooDialect
"""

The above entrypoint would then be accessed as create_engine("mysql+foodialect://").
SQLAlchemy also allows a dialect to be registered within the current process, bypassing the need for separate installation. Use the register() function as follows:

from sqlalchemy.dialects import registry
registry.register("mysql.foodialect", "myapp.dialect", "MyMySQLDialect")

The above will respond to create_engine("mysql+foodialect://") and load the MyMySQLDialect class from the myapp.dialect module.
class sqlalchemy.engine.Connection(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)

Bases: sqlalchemy.engine.Connectable
Provides high-level functionality for a wrapped DB-API connection.

Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. Provides a begin() method to return Transaction objects.
The Connection object is not thread-safe. While a Connection can be shared among threads using properly synchronized access, it is still possible that the underlying DBAPI connection may not support shared access between threads. Check the DBAPI documentation for details.
The Connection object represents a single dbapi connection checked out from the connection pool. In this state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, connections should be returned to the connection pool (i.e. connection.close()) whenever the connection is not in use.
__init__(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)

Construct a new Connection.

The constructor here is not public and is called only by an Engine. See the Engine.connect() and Engine.contextual_connect() methods.
begin()

Begin a transaction and return a transaction handle.

The returned object is an instance of Transaction. This object represents the “scope” of the transaction, which completes when either the Transaction.rollback() or Transaction.commit() method is called.

Nested calls to begin() on the same Connection will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:

trans = conn.begin()   # outermost transaction
trans2 = conn.begin()  # "nested"
trans2.commit()        # does nothing
trans.commit()         # actually commits
Calls to Transaction.commit() only have an effect when invoked via the outermost Transaction object, though the Transaction.rollback() method of any of the Transaction objects will roll back the transaction.
See also

Connection.begin_nested() - use a SAVEPOINT

Connection.begin_twophase() - use a two phase / XID transaction

Engine.begin() - context manager available from Engine

begin_nested()

Begin a nested transaction and return a transaction handle.

The returned object is an instance of NestedTransaction.
Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction as a whole.
begin_twophase(xid=None)

Begin a two-phase or XA transaction and return a transaction handle.

The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a prepare() method.

Parameters: xid – the two phase transaction id. If not supplied, a random id will be generated.
close()

Close this Connection.

This results in a release of the underlying database resources, that is, the DBAPI connection referenced internally. The DBAPI connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this Connection. Any transactional state present on the DBAPI connection is also unconditionally released via the DBAPI connection's rollback() method, regardless of any Transaction object that may be outstanding with regards to this Connection.

After close() is called, the Connection is permanently in a closed state, and will allow no further operations.
closed

Return True if this connection is closed.

connect()

Returns a branched version of this Connection.

The Connection.close() method on the returned Connection can be called and this Connection will remain open.

This method provides usage symmetry with Engine.connect(), including for usage with context managers.
connection

The underlying DB-API connection managed by this Connection.

See also

default_isolation_level

The default isolation level assigned to this Connection.
This is the isolation level setting that the Connection has when first procured via the Engine.connect() method. This level stays in place until the Connection.execution_options.isolation_level is used to change the setting on a per-Connection basis.

Unlike Connection.get_isolation_level(), this attribute is set ahead of time from the first connection procured by the dialect, so no SQL query is invoked when this accessor is called.
New in version 0.9.9.
See also

Connection.get_isolation_level() - view current level

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level - set per Connection isolation level

detach()

Detach the underlying DB-API connection from its connection pool.

E.g.:

with engine.connect() as conn:
    conn.detach()
    conn.execute("SET search_path TO schema1, schema2")

    # work with connection

# connection is fully closed (since we used "with:", can
# also call .close())
This Connection instance will remain usable. When closed (or exited from a context manager context as above), the DB-API connection will be literally closed and not returned to its originating pool.
This method can be used to insulate the rest of an application from a modified state on a connection (such as a transaction isolation level or similar).
execute(object_, *multiparams, **params)

Executes a SQL statement construct and returns a ResultProxy.
execution_options(**opt)

Set non-SQL options for the connection which take effect during execution.

The method returns a copy of this Connection which references the same underlying DBAPI connection, but also defines the given execution options which will take effect for a call to execute(). As the new Connection references the same underlying resource, it's usually a good idea to ensure that the copies will be discarded immediately, which is implicit if used as in:

result = connection.execution_options(stream_results=True).\
    execute(stmt)
Note that any key/value can be passed to Connection.execution_options(), and it will be stored in the _execution_options dictionary of the Connection. It is suitable for usage by end-user schemes to communicate with event listeners, for example.

The keywords that are currently recognized by SQLAlchemy itself include all those listed under Executable.execution_options(), as well as others that are specific to Connection.
get_execution_options()

Get the non-SQL options which will take effect during execution.

New in version 1.3.

See also

get_isolation_level()

Return the current isolation level assigned to this Connection.
This will typically be the default isolation level as determined by the dialect, unless the Connection.execution_options.isolation_level feature has been used to alter the isolation level on a per-Connection basis.
This attribute will typically perform a live SQL operation in order
to procure the current isolation level, so the value returned is the
actual level on the underlying DBAPI connection regardless of how
this state was set. Compare to the
Connection.default_isolation_level
accessor
which returns the dialect-level setting without performing a SQL
query.
New in version 0.9.9.
See also

Connection.default_isolation_level - view default level

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level - set per Connection isolation level

in_transaction()

Return True if a transaction is in progress.
info

Info dictionary associated with the underlying DBAPI connection referred to by this Connection, allowing user-defined data to be associated with the connection.
The data here will follow along with the DBAPI connection including
after it is returned to the connection pool and used again
in subsequent instances of Connection
.
invalidate(exception=None)

Invalidate the underlying DBAPI connection associated with this Connection.
The underlying DBAPI connection is literally closed (if possible), and is discarded. Its source connection pool will typically lazily create a new connection to replace it.
Upon the next use (where “use” typically means using the Connection.execute() method or similar), this Connection will attempt to procure a new DBAPI connection using the services of the Pool as a source of connectivity (e.g. a “reconnection”).

If a transaction was in progress (e.g. the Connection.begin() method has been called) when the Connection.invalidate() method is called, at the DBAPI level all state associated with this transaction is lost, as the DBAPI connection is closed. The Connection will not allow a reconnection to proceed until the Transaction object is ended, by calling the Transaction.rollback() method; until that point, any attempt at continuing to use the Connection will raise an InvalidRequestError. This is to prevent applications from accidentally continuing an ongoing transactional operation despite the fact that the transaction has been lost due to an invalidation.
The Connection.invalidate() method, just like auto-invalidation, will at the connection pool level invoke the PoolEvents.invalidate() event.

See also

invalidated

Return True if this connection was invalidated.

run_callable(callable_, *args, **kwargs)

Given a callable object or function, execute it, passing a Connection as the first argument.

The given *args and **kwargs are passed subsequent to the Connection argument.
This function, along with Engine.run_callable()
,
allows a function to be run with a Connection
or Engine
object without the need to know
which one is being dealt with.
scalar(object_, *multiparams, **params)

Executes and returns the first column of the first row.

The underlying result/cursor is closed after execution.

schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>

Return the “.schema” attribute for an object.

Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.
New in version 1.1.
See also
transaction(callable_, *args, **kwargs)

Execute the given function within a transaction boundary.

The function is passed this Connection as the first argument, followed by the given *args and **kwargs, e.g.:

def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

conn.transaction(do_something, 5, 10)
The operations inside the function are all invoked within the
context of a single Transaction
.
Upon success, the transaction is committed. If an
exception is raised, the transaction is rolled back
before propagating the exception.
Note

The transaction() method is superseded by the usage of the Python with: statement, which can be used with Connection.begin():

with conn.begin():
    conn.execute("some statement", {'x':5, 'y':10})

As well as with Engine.begin():

with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})
See also

Engine.begin() - engine-level transactional context

Engine.transaction() - engine-level version of Connection.transaction()
class sqlalchemy.engine.Connectable

Interface for an object which supports execution of SQL constructs.

The two implementations of Connectable are Connection and Engine.

Connectable must also implement the 'dialect' member which references a Dialect instance.
connect(**kwargs)

Return a Connection object.

Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.
contextual_connect(*arg, **kw)

Return a Connection object which may be part of an ongoing context.

Deprecated since version 1.3: The Engine.contextual_connect() and Connection.contextual_connect() methods are deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from an Engine, use the Engine.connect() method.

Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.
create(entity, **kwargs)

Emit CREATE statements for the given schema entity.

Deprecated since version 0.7: The Connectable.create() method is deprecated and will be removed in a future release. Please use the .create() method on specific schema objects to emit DDL sequences, including Table.create(), Index.create(), and MetaData.create_all().
drop(entity, **kwargs)

Emit DROP statements for the given schema entity.

Deprecated since version 0.7: The Connectable.drop() method is deprecated and will be removed in a future release. Please use the .drop() method on specific schema objects to emit DDL sequences, including Table.drop(), Index.drop(), and MetaData.drop_all().
engine = None

The Engine instance referred to by this Connectable.

May be self if this is already an Engine.

execute(object_, *multiparams, **params)

Executes the given construct and returns a ResultProxy.

scalar(object_, *multiparams, **params)

Executes and returns the first column of the first row.

The underlying cursor is closed after execution.
class sqlalchemy.engine.CreateEnginePlugin(url, kwargs)

A set of hooks intended to augment the construction of an Engine object based on entrypoint names in a URL.
The purpose of CreateEnginePlugin
is to allow third-party
systems to apply engine, pool and dialect level event listeners without
the need for the target application to be modified; instead, the plugin
names can be added to the database URL. Target applications for
CreateEnginePlugin
include:
Plugins are registered using entry points in a similar way as that of dialects:

entry_points={
    'sqlalchemy.plugins': [
        'myplugin = myapp.plugins:MyPlugin'
    ]
}

A plugin that uses the above names would be invoked from a database URL as in:

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test?plugin=myplugin")
Alternatively, the create_engine.plugins argument may be passed as a list to create_engine():

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test",
    plugins=["myplugin"])
New in version 1.2.3: plugin names can also be specified to create_engine() as a list.
The plugin argument supports multiple instances, so that a URL may specify multiple plugins; they are loaded in the order stated in the URL:

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=plugin_one&plugin=plugin_two&plugin=plugin_three")
A plugin can receive additional arguments from the URL string as well as from the keyword arguments passed to create_engine(). The URL object and the keyword dictionary are passed to the constructor so that these arguments can be extracted from the url's URL.query collection as well as from the dictionary:

class MyPlugin(CreateEnginePlugin):
    def __init__(self, url, kwargs):
        self.my_argument_one = url.query.pop('my_argument_one')
        self.my_argument_two = url.query.pop('my_argument_two')
        self.my_argument_three = kwargs.pop('my_argument_three', None)
Arguments like those illustrated above would be consumed from the following:

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
    my_argument_three='bat')
The URL and dictionary are used for subsequent setup of the engine as they are, so the plugin can modify their arguments in-place. Arguments that are only understood by the plugin should be popped or otherwise removed so that they aren’t interpreted as erroneous arguments afterwards.
When the engine creation process completes and produces the Engine object, it is again passed to the plugin via the CreateEnginePlugin.engine_created() hook. In this hook, additional changes can be made to the engine, most typically involving setup of events (e.g. those defined in Core Events).
New in version 1.1.
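A sketch of such a plugin which attaches a listener in engine_created(); the "connect" event used here is a real pool-level event, while the class and listener names are illustrative and the plugin is assumed to be registered under an entrypoint name as shown above:

from sqlalchemy import event
from sqlalchemy.engine import CreateEnginePlugin

class LogConnectsPlugin(CreateEnginePlugin):
    def engine_created(self, engine):
        # attach a pool-level "connect" listener to the finished Engine
        @event.listens_for(engine, "connect")
        def on_connect(dbapi_connection, connection_record):
            print("new DBAPI connection:", dbapi_connection)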
__init__(url, kwargs)

Construct a new CreateEnginePlugin.

The plugin object is instantiated individually for each call to create_engine(). A single Engine will be passed to the CreateEnginePlugin.engine_created() method corresponding to this URL.
engine_created(engine)

Receive the Engine object when it is fully constructed.

The plugin may make additional changes to the engine, such as registering engine or connection pool events.

handle_dialect_kwargs(dialect_cls, dialect_args)

Parse and modify dialect kwargs.

handle_pool_kwargs(pool_cls, pool_args)

Parse and modify pool kwargs.
class sqlalchemy.engine.Engine(pool, dialect, url, logging_name=None, echo=None, proxy=None, execution_options=None, hide_parameters=False)

Bases: sqlalchemy.engine.Connectable, sqlalchemy.log.Identified

Connects a Pool and Dialect together to provide a source of database connectivity and behavior.

An Engine object is instantiated publicly using the create_engine() function.
begin(close_with_result=False)

Return a context manager delivering a Connection with a Transaction established.

E.g.:

with engine.begin() as conn:
    conn.execute("insert into table (x, y, z) values (1, 2, 3)")
    conn.execute("my_special_procedure(5)")

Upon successful operation, the Transaction is committed. If an error is raised, the Transaction is rolled back.
The close_with_result flag is normally False, and indicates that the Connection will be closed when the operation is complete. When set to True, it indicates the Connection is in “single use” mode, where the ResultProxy returned by the first call to Connection.execute() will close the Connection when that ResultProxy has exhausted all result rows.
See also

Engine.connect() - procure a Connection from an Engine.

Connection.begin() - start a Transaction for a particular Connection.

connect(**kwargs)

Return a new Connection object.
The Connection
object is a facade that uses a DBAPI
connection internally in order to communicate with the database. This
connection is procured from the connection-holding Pool
referenced by this Engine
. When the
close()
method of the Connection
object
is called, the underlying DBAPI connection is then returned to the
connection pool, where it may be used again in a subsequent call to
connect()
.
contextual_connect(close_with_result=False, **kwargs)

Return a Connection object which may be part of some ongoing context.

Deprecated since version 1.3: The Engine.contextual_connect() method is deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from an Engine, use the Engine.connect() method.
By default, this method does the same thing as Engine.connect()
.
Subclasses of Engine
may override this method
to provide contextual behavior.
Parameters: close_with_result – When True, the first ResultProxy created by the Connection will call the Connection.close() method of that connection as soon as any pending result rows are exhausted. This is used to supply the “connectionless execution” behavior provided by the Engine.execute() method.
dispose()

Dispose of the connection pool used by this Engine.
This has the effect of fully closing all currently checked in
database connections. Connections that are still checked out
will not be closed, however they will no longer be associated
with this Engine
, so when they are closed individually,
eventually the Pool
which they are associated with will
be garbage collected and they will be closed out fully, if
not already closed on checkin.
A new connection pool is created immediately after the old one has
been disposed. This new pool, like all SQLAlchemy connection pools,
does not make any actual connections to the database until one is
first requested, so as long as the Engine
isn’t used again,
no new connections will be made.
See also

execute(statement, *multiparams, **params)

Executes the given construct and returns a ResultProxy.
The arguments are the same as those used by
Connection.execute()
.
Here, a Connection
is acquired using the
contextual_connect()
method, and the statement executed
with that connection. The returned ResultProxy
is flagged
such that when the ResultProxy
is exhausted and its
underlying cursor is closed, the Connection
created here
will also be closed, which allows its associated DBAPI connection
resource to be returned to the connection pool.
execution_options(**opt)

Return a new Engine that will provide Connection objects with the given execution options.
The returned Engine remains related to the original Engine in that it shares the same connection pool and other state:

- The Pool used by the new Engine is the same instance. The Engine.dispose() method will replace the connection pool instance for the parent engine as well as this one.
- Event listeners are “cascaded” - meaning, the new Engine inherits the events of the parent, and new events can be associated with the new Engine individually.
- The logging configuration and logging_name is copied from the parent Engine.

The intent of the Engine.execution_options() method is to implement “sharding” schemes where multiple Engine objects refer to the same connection pool, but are differentiated by options that would be consumed by a custom event:
primary_engine = create_engine("mysql://")
shard1 = primary_engine.execution_options(shard_id="shard1")
shard2 = primary_engine.execution_options(shard_id="shard2")
Above, the shard1 engine serves as a factory for Connection objects that will contain the execution option shard_id=shard1, and shard2 will produce Connection objects that contain the execution option shard_id=shard2.
An event handler can consume the above execution option to perform a schema switch or other operation, given a connection. Below we emit a MySQL use statement to switch databases, at the same time keeping track of which database we've established using the Connection.info dictionary, which gives us a persistent storage space that follows the DBAPI connection:

from sqlalchemy import event
from sqlalchemy.engine import Engine

shards = {"default": "base", "shard_1": "db1", "shard_2": "db2"}

@event.listens_for(Engine, "before_cursor_execute")
def _switch_shard(conn, cursor, stmt,
        params, context, executemany):
    shard_id = conn._execution_options.get('shard_id', "default")
    current_shard = conn.info.get("current_shard", None)

    if current_shard != shard_id:
        cursor.execute("use %s" % shards[shard_id])
        conn.info["current_shard"] = shard_id
See also

Connection.execution_options() - update execution options on a Connection object.

Engine.update_execution_options() - update the execution options for a given Engine in place.

get_execution_options()

Get the non-SQL options which will take effect during execution.

See also

has_table(table_name, schema=None)

Return True if the given backend has a table of the given name.
See also

Fine Grained Reflection with Inspector - detailed schema inspection using the Inspector interface.

quoted_name - used to pass quoting information along with a schema identifier.

raw_connection(_connection=None)

Return a “raw” DBAPI connection from the connection pool.
The returned object is a proxied version of the DBAPI
connection object used by the underlying driver in use.
The object will have all the same behavior as the real DBAPI
connection, except that its close()
method will result in the
connection being returned to the pool, rather than being closed
for real.
This method provides direct DBAPI connection access for
special situations when the API provided by Connection
is not needed. When a Connection
object is already
present, the DBAPI connection is available using
the Connection.connection
accessor.
See also
run_callable(callable_, *args, **kwargs)

Given a callable object or function, execute it, passing a Connection as the first argument.

The given *args and **kwargs are passed subsequent to the Connection argument.
This function, along with Connection.run_callable()
,
allows a function to be run with a Connection
or Engine
object without the need to know
which one is being dealt with.
scalar(statement, *multiparams, **params)

Executes and returns the first column of the first row.

The underlying cursor is closed after execution.

schema_for_object = <sqlalchemy.sql.schema._SchemaTranslateMap object>

Return the “.schema” attribute for an object.

Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.
New in version 1.1.
See also
table_names(schema=None, connection=None)

Return a list of all table names available in the database.
transaction(callable_, *args, **kwargs)

Execute the given function within a transaction boundary.

The function is passed a Connection newly procured from Engine.contextual_connect() as the first argument, followed by the given *args and **kwargs, e.g.:

def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

engine.transaction(do_something, 5, 10)
The operations inside the function are all invoked within the
context of a single Transaction
.
Upon success, the transaction is committed. If an
exception is raised, the transaction is rolled back
before propagating the exception.
Note

The transaction() method is superseded by the usage of the Python with: statement, which can be used with Engine.begin():

with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})
See also

Engine.begin() - engine-level transactional context

Connection.transaction() - connection-level version of Engine.transaction()

update_execution_options(**opt)

Update the default execution_options dictionary of this Engine.
The given keys/values in **opt are added to the
default execution options that will be used for
all connections. The initial contents of this dictionary
can be sent via the execution_options
parameter
to create_engine()
.
class sqlalchemy.engine.ExceptionContext

Encapsulate information about an error condition in progress.
This object exists solely to be passed to the
ConnectionEvents.handle_error()
event, supporting an interface that
can be extended without backwards-incompatibility.
New in version 0.9.7.
chained_exception = None

The exception that was returned by the previous handler in the exception chain, if any.
If present, this exception will be the one ultimately raised by SQLAlchemy unless a subsequent handler replaces it.
May be None.
connection = None

The Connection in use during the exception.
This member is present, except in the case of a failure when first connecting.
See also
cursor = None

The DBAPI cursor object.

May be None.

engine = None

The Engine in use during the exception.
This member should always be present, even in the case of a failure when first connecting.
New in version 1.0.0.
execution_context = None

The ExecutionContext corresponding to the execution operation in progress.
This is present for statement execution operations, but not for
operations such as transaction begin/end. It also is not present when
the exception was raised before the ExecutionContext
could be constructed.
Note that the ExceptionContext.statement and ExceptionContext.parameters members may represent a different value than that of the ExecutionContext, potentially in the case where a ConnectionEvents.before_cursor_execute() event or similar modified the statement/parameters to be sent.

May be None.
invalidate_pool_on_disconnect = True

Represent whether all connections in the pool should be invalidated when a “disconnect” condition is in effect.
Setting this flag to False within the scope of the
ConnectionEvents.handle_error()
event will have the effect such
that the full collection of connections in the pool will not be
invalidated during a disconnect; only the current connection that is the
subject of the error will actually be invalidated.
The purpose of this flag is for custom disconnect-handling schemes where the invalidation of other connections in the pool is to be performed based on other conditions, or even on a per-connection basis.
New in version 1.0.3.
is_disconnect = None

Represent whether the exception that occurred represents a “disconnect” condition.
This flag will always be True or False within the scope of the
ConnectionEvents.handle_error()
handler.
SQLAlchemy will defer to this flag in order to determine whether or not the connection should be invalidated subsequently. That is, by assigning to this flag, a “disconnect” event which then results in a connection and pool invalidation can be invoked or prevented by changing this flag.
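As a sketch, a handler for the ConnectionEvents.handle_error() event might mark an additional driver error as a disconnect; the message text matched here is purely illustrative, and the engine is assumed to exist already:

from sqlalchemy import event

@event.listens_for(engine, "handle_error")
def _mark_disconnect(context):
    # treat a hypothetical driver message as a disconnect condition,
    # triggering the pool invalidation behavior described above
    if "connection has gone away" in str(context.original_exception):
        context.is_disconnect = True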
original_exception = None

The exception object which was caught.

This member is always present.

parameters = None

Parameter collection that was emitted directly to the DBAPI.
May be None.
sqlalchemy_exception = None

The sqlalchemy.exc.StatementError which wraps the original, and will be raised if exception handling is not circumvented by the event.
May be None, as not all exception types are wrapped by SQLAlchemy. For DBAPI-level exceptions that subclass the dbapi’s Error class, this field will always be present.
statement = None

String SQL statement that was emitted directly to the DBAPI.
May be None.
class sqlalchemy.engine.NestedTransaction(connection, parent)

Bases: sqlalchemy.engine.Transaction

Represent a 'nested', or SAVEPOINT transaction.

A new NestedTransaction object may be procured using the Connection.begin_nested() method.

The interface is the same as that of Transaction.
class sqlalchemy.engine.ResultProxy(context)

Wraps a DB-API cursor object to provide easier access to row columns.

Individual columns may be accessed by their integer position, case-insensitive column name, or by schema.Column object. e.g.:
row = fetchone()
col1 = row[0] # access via integer position
col2 = row['col2'] # access via name
col3 = row[mytable.c.mycol] # access via Column object.
ResultProxy also handles post-processing of result column data using TypeEngine objects, which are referenced from the originating SQL statement that produced this result set.
_cursor_description()

May be overridden by subclasses.

_soft_close()

Soft close this ResultProxy.
This releases all DBAPI cursor resources, but leaves the ResultProxy “open” from a semantic perspective, meaning the fetchXXX() methods will continue to return empty results.
This method is called automatically when:
This method is not public, but is documented in order to clarify the “autoclose” process used.
New in version 1.0.0.
See also
close()

Close this ResultProxy.
This closes out the underlying DBAPI cursor corresponding
to the statement execution, if one is still present. Note that the
DBAPI cursor is automatically released when the ResultProxy
exhausts all available rows. ResultProxy.close()
is generally
an optional method except in the case when discarding a
ResultProxy
that still has additional rows pending for fetch.
In the case of a result that is the product of
connectionless execution,
the underlying Connection
object is also closed, which
releases DBAPI connection resources.
After this method is called, it is no longer valid to call upon
the fetch methods, which will raise a ResourceClosedError
on subsequent use.
Changed in version 1.0.0: - the ResultProxy.close()
method
has been separated out from the process that releases the underlying
DBAPI cursor resource. The “auto close” feature of the
Connection
now performs a so-called “soft close”, which
releases the underlying DBAPI cursor, but allows the
ResultProxy
to still behave as an open-but-exhausted
result set; the actual ResultProxy.close()
method is never
called. It is still safe to discard a ResultProxy
that has been fully exhausted without calling this method.
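For example, discarding a result that still has rows pending warrants an explicit close; a sketch using the illustrative table “x”:
result = connection.execute("select a, b from x")
row = result.fetchone()   # fetch only the first row
result.close()            # rows are still pending, so release the DBAPI cursor
                          # (and, for connectionless execution, the Connection)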
fetchall()
Fetch all rows, just like DB-API cursor.fetchall().
After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Subsequent calls to ResultProxy.fetchall() will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.
Changed in version 1.0.0: - Added “soft close” behavior which
allows the result to be used in an “exhausted” state prior to
calling the ResultProxy.close()
method.
fetchmany(size=None)
Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).
After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Calls to ResultProxy.fetchmany() after all rows have been exhausted will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.
Changed in version 1.0.0: - Added “soft close” behavior which
allows the result to be used in an “exhausted” state prior to
calling the ResultProxy.close()
method.
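A typical batching pattern; a sketch using the illustrative table “x”:
result = connection.execute("select a, b from x")
while True:
    batch = result.fetchmany(100)   # up to 100 rows per call
    if not batch:
        break                       # all rows exhausted; cursor resources released
    for row in batch:
        print(row['a'], row['b'])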
fetchone()
Fetch one row, just like DB-API cursor.fetchone().
After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.
Calls to ResultProxy.fetchone() after all rows have been exhausted will return None. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.
Changed in version 1.0.0: - Added “soft close” behavior which
allows the result to be used in an “exhausted” state prior to
calling the ResultProxy.close()
method.
first()
Fetch the first row and then close the result set unconditionally.
Returns None if no row is present.
After calling this method, the object is fully closed,
e.g. the ResultProxy.close()
method will have been called.
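For example, a sketch using the illustrative table “x”:
row = connection.execute("select a, b from x where a = 1").first()
if row is not None:
    print(row['b'])
# the result is fully closed at this point, whether or not a row was present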
inserted_primary_key
Return the primary key for the row just inserted.
The return value is a list of scalar values corresponding to the list of primary key columns in the target table.
This only applies to single row insert() constructs which did not explicitly specify Insert.returning().
Note that primary key columns which specify a
server_default clause,
or otherwise do not qualify as “autoincrement”
columns (see the notes at Column
), and were
generated using the database-side default, will
appear in this list as None
unless the backend
supports “returning” and the insert statement executed
with the “implicit returning” enabled.
Raises InvalidRequestError
if the executed
statement is not a compiled expression construct
or is not an insert() construct.
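A sketch of typical usage; the table definition below is hypothetical and assumes an autoincrementing integer primary key:
from sqlalchemy import MetaData, Table, Column, Integer

metadata = MetaData()
x_table = Table(
    "x", metadata,
    Column("id", Integer, primary_key=True),   # autoincrementing by default
    Column("a", Integer),
    Column("b", Integer),
)

result = connection.execute(x_table.insert().values(a=1, b=2))
print(result.inserted_primary_key)   # e.g. [1], one value per primary key column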
is_insert
True if this ResultProxy is the result of executing an expression language compiled expression.insert() construct.
When True, this implies that the
inserted_primary_key
attribute is accessible,
assuming the statement did not include
a user defined “returning” construct.
keys()
Return the current set of string keys for rows.
last_inserted_params()
Return the collection of inserted parameters from this execution.
Raises InvalidRequestError
if the executed
statement is not a compiled expression construct
or is not an insert() construct.
last_updated_params()
Return the collection of updated parameters from this execution.
Raises InvalidRequestError
if the executed
statement is not a compiled expression construct
or is not an update() construct.
lastrow_has_defaults()
Return lastrow_has_defaults() from the underlying ExecutionContext.
See ExecutionContext
for details.
lastrowid
Return the ‘lastrowid’ accessor on the DBAPI cursor.
This is a DBAPI-specific method and is only functional for those backends which support it, for statements where it is appropriate. Its behavior is not consistent across backends.
Usage of this method is normally unnecessary when
using insert() expression constructs; the
inserted_primary_key
attribute provides a
tuple of primary key values for a newly inserted row,
regardless of database backend.
next()
Implement the next() protocol.
New in version 1.2.
postfetch_cols()
Return postfetch_cols() from the underlying ExecutionContext.
See ExecutionContext
for details.
Raises InvalidRequestError
if the executed
statement is not a compiled expression construct
or is not an insert() or update() construct.
prefetch_cols()
Return prefetch_cols() from the underlying ExecutionContext.
See ExecutionContext
for details.
Raises InvalidRequestError
if the executed
statement is not a compiled expression construct
or is not an insert() or update() construct.
returned_defaults
Return the values of default columns that were fetched using the ValuesBase.return_defaults() feature.
The value is an instance of RowProxy, or None if ValuesBase.return_defaults() was not used or if the backend does not support RETURNING.
New in version 0.9.0.
See also
ValuesBase.return_defaults()
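A sketch, assuming the hypothetical x_table from the inserted_primary_key example and a backend that supports RETURNING:
stmt = x_table.insert().values(a=1, b=2).return_defaults()
result = connection.execute(stmt)
print(result.returned_defaults)   # RowProxy of server-generated values, or None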
returns_rows
True if this ResultProxy returns rows.
I.e. if it is legal to call the methods fetchone(), fetchmany(), and fetchall().
rowcount
Return the ‘rowcount’ for this result.
The ‘rowcount’ reports the number of rows matched by the WHERE criterion of an UPDATE or DELETE statement.
Note
Notes regarding ResultProxy.rowcount:
ResultProxy.rowcount is only useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does not return the number of rows available from the results of a SELECT statement, as DBAPIs cannot support this functionality when rows are unbuffered.
ResultProxy.rowcount may not be fully implemented by all dialects. In particular, most DBAPIs do not support an aggregate rowcount result from an executemany call.
The ResultProxy.supports_sane_rowcount() and ResultProxy.supports_sane_multi_rowcount() methods will report from the dialect if each usage is known to be supported.
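For example, a sketch using the illustrative table “x”:
result = connection.execute("update x set b = 0 where a = 1")
print(result.rowcount)   # number of rows matched by the WHERE criterion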
scalar()
Fetch the first column of the first row, and close the result set.
Returns None if no row is present.
After calling this method, the object is fully closed,
e.g. the ResultProxy.close()
method will have been called.
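For example, fetching a single value; a sketch using the illustrative table “x”:
count = connection.execute("select count(*) from x").scalar()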
supports_sane_multi_rowcount()
Return supports_sane_multi_rowcount from the dialect.
See ResultProxy.rowcount
for background.
supports_sane_rowcount()
Return supports_sane_rowcount from the dialect.
See ResultProxy.rowcount
for background.
sqlalchemy.engine.RowProxy(parent, row, processors, keymap)
Bases: sqlalchemy.engine.BaseRowProxy
Proxy values from a single cursor row.
Mostly follows “ordered dictionary” behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions).
has_key(key)
Return True if this RowProxy contains the given key.
items()
Return a list of tuples, each tuple containing a key/value pair.
keys()
Return the list of keys as strings represented by this RowProxy.
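A sketch of this mapping-like access, using the illustrative table “x” and assuming at least one row is present (the values shown in comments are illustrative):
row = connection.execute("select a, b from x").first()
print(row.keys())         # ['a', 'b']
print(row.items())        # e.g. [('a', 1), ('b', 2)]
print(row.has_key('a'))   # True
print(row['a'], row[0])   # lookup by name and by integer position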
sqlalchemy.engine.Transaction(connection, parent)
Represent a database transaction in progress.
The Transaction object is procured by calling the begin() method of Connection:
from sqlalchemy import create_engine
engine = create_engine("postgresql://scott:tiger@localhost/test")
connection = engine.connect()
trans = connection.begin()
connection.execute("insert into x (a, b) values (1, 2)")
trans.commit()
The object provides rollback()
and commit()
methods in order to control transaction boundaries. It
also implements a context manager interface so that
the Python with
statement can be used with the
Connection.begin()
method:
with connection.begin():
    connection.execute("insert into x (a, b) values (1, 2)")
The Transaction object is not threadsafe.
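When the context manager is not used, the usual pattern is to roll back explicitly if an error occurs; a sketch:
trans = connection.begin()
try:
    connection.execute("insert into x (a, b) values (1, 2)")
    trans.commit()
except:
    trans.rollback()
    raise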
close()
Close this Transaction.
If this transaction is the base transaction in a begin/commit nesting, the transaction will roll back. Otherwise, the method returns.
This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
commit()
Commit this Transaction.
rollback()
Roll back this Transaction.
sqlalchemy.engine.TwoPhaseTransaction(connection, xid)
Bases: sqlalchemy.engine.Transaction
Represent a two-phase transaction.
A new TwoPhaseTransaction object may be procured using the Connection.begin_twophase() method.
The interface is the same as that of Transaction, with the addition of the prepare() method.
prepare()
Prepare this TwoPhaseTransaction.
After a PREPARE, the transaction can be committed.
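A sketch of the full two-phase sequence, assuming a backend and DBAPI with two-phase support; the transaction id below is arbitrary:
connection = engine.connect()
trans = connection.begin_twophase("my_xid_001")   # xid chosen by the application
connection.execute("insert into x (a, b) values (1, 2)")
trans.prepare()   # phase one: the transaction is prepared on the server
trans.commit()    # phase two: commit the prepared transaction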