
pandas to_sql all columns as nvarchar


To use dtype, pass a dictionary keyed by data frame column name, with the corresponding sqlalchemy types as values. Change the keys to your actual data frame column names:

import sqlalchemy
import pandas as pd

...

column_errors.to_sql('load_errors', push_conn,
                     if_exists='append',
                     index=False,
                     dtype={'datefld': sqlalchemy.DateTime(),
                            'intfld': sqlalchemy.types.INTEGER(),
                            'strfld': sqlalchemy.types.NVARCHAR(length=255),
                            'floatfld': sqlalchemy.types.Float(precision=3, asdecimal=True),
                            'booleanfld': sqlalchemy.types.Boolean})

You can even build this dtype dictionary dynamically if you do not know the column names or types beforehand:

import sqlalchemy

def sqlcol(dfparam):
    # Map each pandas dtype to a corresponding SQLAlchemy type
    dtypedict = {}
    for i, j in zip(dfparam.columns, dfparam.dtypes):
        if "object" in str(j):
            dtypedict.update({i: sqlalchemy.types.NVARCHAR(length=255)})
        if "datetime" in str(j):
            dtypedict.update({i: sqlalchemy.types.DateTime()})
        if "float" in str(j):
            dtypedict.update({i: sqlalchemy.types.Float(precision=3, asdecimal=True)})
        if "int" in str(j):
            dtypedict.update({i: sqlalchemy.types.INT()})
    return dtypedict

outputdict = sqlcol(df)

column_errors.to_sql('load_errors',
                     push_conn,
                     if_exists='append',
                     index=False,
                     dtype=outputdict)
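
For reference, a minimal sketch of what sqlcol returns for a small frame (the DataFrame below is a made-up example, not from the question):

import pandas as pd

# Hypothetical frame covering the four dtype branches above
sample = pd.DataFrame({
    'strfld': ['a', 'b'],                                     # object   -> NVARCHAR(255)
    'datefld': pd.to_datetime(['2020-01-01', '2020-01-02']),  # datetime -> DateTime
    'floatfld': [1.5, 2.5],                                   # float    -> Float
    'intfld': [1, 2],                                         # int      -> INT
})

print(sqlcol(sample))
# e.g. {'strfld': NVARCHAR(length=255), 'datefld': DateTime(), ...}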


You can create this dict dynamically if you do not know the column names in advance:

from sqlalchemy.types import NVARCHAR

df.to_sql(...., dtype={col_name: NVARCHAR for col_name in df})

Note that you have to pass the sqlalchemy type object itself (or an instance to specify parameters like NVARCHAR(length=10)) and not a string as in your example.
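
For illustration, a minimal sketch of that difference; the table names, the in-memory engine, and the sample frame here are hypothetical placeholders, not part of the original answer:

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR

engine = create_engine('sqlite://')                 # hypothetical in-memory connection
df = pd.DataFrame({'a': ['x', 'y'], 'b': [1, 2]})   # hypothetical frame

# Passing the type object: every column becomes NVARCHAR with no explicit length
df.to_sql('demo_plain', engine, index=False, dtype={col: NVARCHAR for col in df})

# Passing an instance: parameters such as the length can be set
df.to_sql('demo_len', engine, index=False, dtype={col: NVARCHAR(length=10) for col in df})

# Not valid -- dtype values must be SQLAlchemy types, not strings:
# df.to_sql('demo_bad', engine, index=False, dtype={col: 'NVARCHAR(10)' for col in df})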