The cursor class¶
class cursor¶
Allows Python code to execute PostgreSQL commands in a database session. Cursors are created by the connection.cursor() method: they are bound to the connection for their entire lifetime and all the commands are executed in the context of the database session wrapped by the connection.

Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors. Cursors created from different connections may or may not be isolated, depending on the connections' isolation level. See also the rollback() and commit() methods.

Cursors are not thread safe: a multithread application can create many cursors from the same connection and should use each cursor from a single thread. See Thread and process safety for details.
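A minimal sketch of the lifecycle described above, assuming `conn` is an open psycopg2 connection (e.g. from `psycopg2.connect(...)`) and a hypothetical `test` table:

```python
def run_query(conn):
    """Execute a query through a cursor created from an open connection.

    The "test" table is a placeholder for this sketch.
    """
    cur = conn.cursor()            # cursor bound to this connection
    try:
        cur.execute("SELECT id, num, data FROM test")
        return cur.fetchall()      # executed in the connection's session
    finally:
        cur.close()
```

Any other cursor created from the same connection would see the effects of this query's transaction immediately.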
description¶
Read-only attribute describing the result of a query. It is a sequence of Column instances, each one describing one result column in order. The attribute is None for operations that do not return rows or if the cursor has not had an operation invoked via the execute*() methods yet.

For compatibility with the DB-API, every object can be unpacked as a 7-item sequence: the attributes returned this way are the following. For further details and other available attributes check the Column documentation.

- name: the name of the column returned.
- type_code: the PostgreSQL OID of the column.
- display_size: the actual length of the column in bytes.
- internal_size: the size in bytes of the column associated to this column on the server.
- precision: total number of significant digits in columns of type NUMERIC. None for other types.
- scale: count of decimal digits in the fractional part in columns of type NUMERIC. None for other types.
- null_ok: always None as it is not easy to retrieve from the libpq.

Changed in version 2.4: if possible, column descriptions are named tuples instead of regular tuples.

Changed in version 2.8: column descriptions are instances of Column, exposing extra attributes.
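A sketch of reading the attribute after a query, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table; each entry is unpacked as the classic 7-item DB-API sequence:

```python
def describe_result(conn):
    """Run a query and report (name, type_code) for each result column."""
    cur = conn.cursor()
    cur.execute("SELECT id, num, data FROM test")
    report = []
    for col in cur.description:
        # Each Column also unpacks as the 7-item DB-API tuple.
        name, type_code, display_size, internal_size, precision, scale, null_ok = col
        report.append((name, type_code))
    cur.close()
    return report
```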
close()¶
Close the cursor now (rather than whenever del is executed). The cursor will be unusable from this point forward; an InterfaceError will be raised if any operation is attempted with the cursor.

Changed in version 2.5: if the cursor is used in a with statement, the method is automatically called at the end of the with block.
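A sketch of the with-statement form (psycopg2 2.5 and later), assuming `conn` is an open psycopg2 connection and a hypothetical `test` table:

```python
def insert_row(conn, num, data):
    """Insert a row; the cursor is closed automatically on block exit."""
    with conn.cursor() as cur:   # close() is called at the end of the block
        cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (num, data))
    conn.commit()                # note: the block does not commit for you
```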
closed¶
Read-only boolean attribute: specifies if the cursor is closed (True) or not (False).

DB API extension: the closed attribute is a Psycopg extension to the DB API 2.0.

New in version 2.0.7.
connection¶
Read-only attribute returning a reference to the connection object on which the cursor was created.
name¶
Read-only attribute containing the name of the cursor if it was created as a named cursor by connection.cursor(), or None if it is a client-side cursor. See Server side cursors.

DB API extension: the name attribute is a Psycopg extension to the DB API 2.0.
scrollable¶
Read/write attribute: specifies if a named cursor is declared SCROLL, hence capable of scrolling backwards (using scroll()). If True, the cursor can be scrolled backwards; if False, it is never scrollable. If None (default) the cursor scroll option is not specified, usually but not always meaning no backward scroll (see the DECLARE notes).

Note: set the value before calling execute() or use the connection.cursor() scrollable parameter, otherwise the value will have no effect.

New in version 2.5.

DB API extension: the scrollable attribute is a Psycopg extension to the DB API 2.0.
withhold¶
Read/write attribute: specifies if a named cursor's lifetime should extend outside of the current transaction, i.e., it is possible to fetch from the cursor even after a connection.commit() (but not after a connection.rollback()). See Server side cursors.

Note: set the value before calling execute() or use the connection.cursor() withhold parameter, otherwise the value will have no effect.

New in version 2.4.3.

DB API extension: the withhold attribute is a Psycopg extension to the DB API 2.0.
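A sketch of fetching across a commit, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table:

```python
def fetch_after_commit(conn):
    """Fetch from a named cursor after a commit, thanks to withhold."""
    cur = conn.cursor(name="held")   # named (server-side) cursor
    cur.withhold = True              # must be set before execute()
    cur.execute("SELECT * FROM test")
    conn.commit()                    # the cursor survives the commit...
    rows = cur.fetchall()            # ...so fetching still works here
    cur.close()                      # close it explicitly when done
    return rows
```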
Commands execution methods

execute(query, vars=None)¶
Execute a database operation (query or command).

Parameters may be provided as a sequence or mapping and will be bound to variables in the operation. Variables are specified either with positional (%s) or named (%(name)s) placeholders. See Passing parameters to SQL queries.

The method returns None. If a query was executed, the returned values can be retrieved using fetch*() methods.
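A sketch of both placeholder styles, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table; the values are illustrative:

```python
# Parameters for the two placeholder styles.
positional_vars = (42, "bar")
named_vars = {"num": 42, "data": "bar"}

def insert_both_styles(conn):
    """Bind parameters with positional (%s) and named (%(name)s) placeholders."""
    cur = conn.cursor()
    cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", positional_vars)
    cur.execute("INSERT INTO test (num, data) VALUES (%(num)s, %(data)s)", named_vars)
    conn.commit()
    cur.close()
```

Note that the placeholders are not Python string formatting: the values are adapted and quoted by the driver, never interpolated by you.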
executemany(query, vars_list)¶
Execute a database operation (query or command) against all parameter tuples or mappings found in the sequence vars_list.

The function is mostly useful for commands that update the database: any result set returned by the query is discarded.

Parameters are bound to the query using the same rules described in the execute() method.

Warning: in its current implementation this method is not faster than executing execute() in a loop. For better performance you can use the functions described in Fast execution helpers.
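A sketch of a batch insert, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table; the rows are illustrative:

```python
# One parameter tuple per row to insert.
rows = [(1, "foo"), (2, "bar"), (3, "baz")]

def bulk_insert(conn):
    """Insert every tuple in `rows` with a single executemany() call."""
    cur = conn.cursor()
    cur.executemany("INSERT INTO test (num, data) VALUES (%s, %s)", rows)
    conn.commit()
    cur.close()
```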
callproc(procname[, parameters])¶
Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. Overloaded procedures are supported. Named parameters can be used by supplying the parameters as a dictionary.

This function is, at present, not DBAPI-compliant. The return value is supposed to consist of the sequence of parameters with modified output and input/output parameters. In future versions, the DBAPI-compliant return value may be implemented, but for now the function returns None.

The procedure may provide a result set as output. This is then made available through the standard fetch*() methods.

Changed in version 2.7: added support for named arguments.
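A sketch of both calling conventions, assuming `conn` is an open psycopg2 connection; the `ratio` procedure and its argument names are hypothetical:

```python
def call_procedure(conn):
    """Call a hypothetical procedure and read its result set with fetch*()."""
    cur = conn.cursor()
    cur.callproc("ratio", (1, 2))                 # positional arguments
    result = cur.fetchone()                       # result set from the procedure
    cur.callproc("ratio", {"num": 1, "den": 2})   # named arguments (psycopg2 >= 2.7)
    cur.close()
    return result
```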
mogrify(operation[, parameters])¶
Return a query string after arguments binding. The string returned is exactly the one that would be sent to the database running the execute() method or similar.

The returned string is always a bytes string.

>>> cur.mogrify("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
b"INSERT INTO test (num, data) VALUES (42, E'bar')"

DB API extension: the mogrify() method is a Psycopg extension to the DB API 2.0.
setinputsizes(sizes)¶
This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.
Results retrieval methods

The following methods are used to read data from the database after an execute() call.

Note: cursor objects are iterable, so, instead of calling fetchone() explicitly in a loop, the object itself can be used:

>>> cur.execute("SELECT * FROM test;")
>>> for record in cur:
...     print(record)
...
(1, 100, "abc'def")
(2, None, 'dada')
(3, 42, 'bar')

Changed in version 2.4: iterating over a named cursor fetches itersize records at a time from the backend. Previously only one record was fetched per roundtrip, resulting in a large overhead.

fetchone()¶
Fetch the next row of a query result set, returning a single tuple, or None when no more data is available:

>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone()
(3, 42, 'bar')

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
fetchmany([size=cursor.arraysize])¶
Fetch the next set of rows of a query result, returning a list of tuples. An empty list is returned when no more rows are available.

The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned:

>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchmany(2)
[(1, 100, "abc'def"), (2, None, 'dada')]
>>> cur.fetchmany(2)
[(3, 42, 'bar')]
>>> cur.fetchmany(2)
[]

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.

Note: there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
fetchall()¶
Fetch all (remaining) rows of a query result, returning them as a list of tuples. An empty list is returned if there are no more records to fetch.

>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchall()
[(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')]

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
scroll(value[, mode='relative'])¶
Scroll the cursor in the result set to a new position according to mode.

If mode is relative (default), value is taken as offset to the current position in the result set; if set to absolute, value states an absolute target position.

If the scroll operation would leave the result set, a ProgrammingError is raised and the cursor position is not changed.

Note: according to the DB API 2.0, the exception raised for a cursor out of bound should have been IndexError. The best option is probably to catch both exceptions in your code:

try:
    cur.scroll(1000 * 1000)
except (ProgrammingError, IndexError) as exc:
    deal_with_it(exc)

The method can be used both for client-side cursors and server-side cursors. Server-side cursors can usually scroll backwards only if declared scrollable. Moving out-of-bound in a server-side cursor doesn't result in an exception, if the backend doesn't raise any (PostgreSQL doesn't tell us in a reliable way if we went out of bound).
arraysize¶
This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1, meaning to fetch a single row at a time.
itersize¶
Read/write attribute specifying the number of rows to fetch from the backend at each network roundtrip during iteration on a named cursor. The default is 2000.

New in version 2.4.

DB API extension: the itersize attribute is a Psycopg extension to the DB API 2.0.
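A sketch of tuning the batch size while streaming a large result set, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table:

```python
def stream_rows(conn):
    """Iterate over a named (server-side) cursor with a custom itersize."""
    cur = conn.cursor(name="streamer")  # named cursor: rows stay on the server
    cur.itersize = 500                  # fetch 500 rows per network roundtrip
    cur.execute("SELECT * FROM test")
    for row in cur:                     # batches are fetched transparently
        yield row
    cur.close()
```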
rowcount¶
This read-only attribute specifies the number of rows that the last execute*() produced (for DQL statements like SELECT) or affected (for DML statements like UPDATE or INSERT).

The attribute is -1 in case no execute*() has been performed on the cursor or if the row count of the last operation cannot be determined by the interface.

Note: the DB API 2.0 interface reserves the right to redefine the latter case to have the object return None instead of -1 in future versions of the specification.
rownumber¶
This read-only attribute provides the current 0-based index of the cursor in the result set or None if the index cannot be determined.

The index can be seen as the index of the cursor in a sequence (the result set). The next fetch operation will fetch the row indexed by rownumber in that sequence.
lastrowid¶
This read-only attribute provides the OID of the last row inserted by the cursor. If the table wasn't created with OID support or the last operation is not a single record insert, the attribute is set to None.

Note: PostgreSQL currently advises not to create OIDs on tables, and the default for CREATE TABLE is to not support them. The INSERT ... RETURNING syntax, available from PostgreSQL 8.3, allows more flexibility.
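A sketch of the RETURNING alternative mentioned in the note, assuming `conn` is an open psycopg2 connection and a hypothetical `test` table with a generated `id` column:

```python
def insert_returning_id(conn, num, data):
    """Retrieve the new primary key with INSERT ... RETURNING instead of OIDs."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO test (num, data) VALUES (%s, %s) RETURNING id",
        (num, data),
    )
    new_id = cur.fetchone()[0]  # the RETURNING clause yields the generated id
    conn.commit()
    cur.close()
    return new_id
```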
query¶
Read-only attribute containing the body of the last query sent to the backend (including bound arguments) as a bytes string. None if no query has been executed yet:

>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.query
b"INSERT INTO test (num, data) VALUES (42, E'bar')"

DB API extension: the query attribute is a Psycopg extension to the DB API 2.0.
statusmessage¶
Read-only attribute containing the message returned by the last command:

>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.statusmessage
'INSERT 0 1'

DB API extension: the statusmessage attribute is a Psycopg extension to the DB API 2.0.
cast(oid, s)¶
Convert a value from the PostgreSQL string representation to a Python object.

Use the most specific of the typecasters registered by register_type().

New in version 2.4.

DB API extension: the cast() method is a Psycopg extension to the DB API 2.0.
tzinfo_factory¶
The time zone factory used to handle data types such as TIMESTAMP WITH TIME ZONE. It should be a tzinfo object. A few implementations are available in the psycopg2.tz module.
nextset()¶
This method is not supported (PostgreSQL does not have multiple data sets) and will raise a NotSupportedError exception.
setoutputsize(size[, column])¶
This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.
COPY-related methods

Efficiently copy data from file-like objects to the database and back. See Using COPY TO and COPY FROM for an overview.

DB API extension: the COPY command is a PostgreSQL extension to the SQL standard. As such, its support is a Psycopg extension to the DB API 2.0.

copy_from(file, table, sep='\t', null='\\N', size=8192, columns=None)¶
Read data from the file-like object file, appending it to the table named table.

Parameters:
- file – file-like object to read data from. It must have both read() and readline() methods.
- table – name of the table to copy data into.
- sep – columns separator expected in the file. Defaults to a tab.
- null – textual representation of NULL in the file. The default is the two-character string \N.
- size – size of the buffer used to read from the file.
- columns – iterable with the names of the columns to import. The length and types should match the content of the file to read. If not specified, it is assumed that the entire table matches the file structure.

Example:

>>> f = StringIO("42\tfoo\n74\tbar\n")
>>> cur.copy_from(f, 'test', columns=('num', 'data'))
>>> cur.execute("select * from test where id > 5;")
>>> cur.fetchall()
[(6, 42, 'foo'), (7, 74, 'bar')]

Note: the name of the table is not quoted: if the table name contains uppercase letters or special characters it must be quoted with double quotes:

cur.copy_from(f, '"TABLE"')

Changed in version 2.0.6: added the columns parameter.

Changed in version 2.4: data read from files implementing the io.TextIOBase interface are encoded in the connection encoding when sent to the backend.
copy_to(file, table, sep='\t', null='\\N', columns=None)¶
Write the content of the table named table to the file-like object file. See Using COPY TO and COPY FROM for an overview.

Parameters:
- file – file-like object to write data into. It must have a write() method.
- table – name of the table to copy data from.
- sep – columns separator expected in the file. Defaults to a tab.
- null – textual representation of NULL in the file. The default is the two-character string \N.
- columns – iterable with the names of the columns to export. If not specified, export all the columns.

Example:

>>> cur.copy_to(sys.stdout, 'test', sep="|")
1|100|abc'def
2|\N|dada
...

Note: the name of the table is not quoted: if the table name contains uppercase letters or special characters it must be quoted with double quotes:

cur.copy_to(f, '"TABLE"')

Changed in version 2.0.6: added the columns parameter.

Changed in version 2.4: data sent to files implementing the io.TextIOBase interface are decoded in the connection encoding when read from the backend.
copy_expert(sql, file, size=8192)¶
Submit a user-composed COPY statement. The method is useful to handle all the parameters that PostgreSQL makes available (see the COPY command documentation).

Parameters:
- sql – the COPY statement to execute.
- file – a file-like object to read or write (according to sql).
- size – size of the read buffer to be used in COPY FROM.

The sql statement should be in the form COPY table TO STDOUT to export table to the file object passed as argument, or COPY table FROM STDIN to import the content of the file object into table. If you need to compose a COPY statement dynamically (because table, fields, or query parameters are in Python variables) you may use the objects provided by the psycopg2.sql module.

file must be a readable file-like object (as required by copy_from()) for the sql statement COPY ... FROM STDIN, or a writable one (as required by copy_to()) for COPY ... TO STDOUT.

Example:

>>> cur.copy_expert("COPY test TO STDOUT WITH CSV HEADER", sys.stdout)
id,num,data
1,100,abc'def
2,,dada
...

New in version 2.0.6.

Changed in version 2.4: files implementing the io.TextIOBase interface are dealt with using Unicode data instead of bytes.
Interoperation with other C API modules

pgresult_ptr¶
Return the cursor's internal PGresult* as an integer. Useful to pass the libpq raw result structure to C functions, e.g. via ctypes:

>>> import ctypes
>>> libpq = ctypes.pydll.LoadLibrary(ctypes.util.find_library('pq'))
>>> libpq.PQcmdStatus.argtypes = [ctypes.c_void_p]
>>> libpq.PQcmdStatus.restype = ctypes.c_char_p
>>> curs.execute("select 'x'")
>>> libpq.PQcmdStatus(curs.pgresult_ptr)
b'SELECT 1'

New in version 2.8.
 
-