
Write Large Pandas DataFrames to SQL Server database


I've got some sad news for you: SQLAlchemy doesn't actually implement bulk imports for SQL Server, so it's just going to issue the same slow, individual INSERT statements that to_sql is already doing. I would say your best bet is to script something up using the bcp command line tool. Here is a script that I've used in the past, but no guarantees:

from subprocess import check_output, call
import pandas as pd
import numpy as np
import os

# configuration
pad = 0.1
tablename = 'sandbox.max.pybcp_test'
overwrite = True
raise_exception = True
server = 'P01'
trusted_connection = True
username = None
password = None
delimiter = '|'

df = pd.read_csv('D:/inputdata.csv', encoding='latin', error_bad_lines=False)

def get_column_def_sql(col):
    # map a pandas column to a T-SQL column definition
    if col.dtype == object:
        width = col.str.len().max() * (1 + pad)
        return '[{}] varchar({})'.format(col.name, int(width))
    elif np.issubdtype(col.dtype, float):
        return '[{}] float'.format(col.name)
    elif np.issubdtype(col.dtype, int):
        return '[{}] int'.format(col.name)
    else:
        if raise_exception:
            raise NotImplementedError('data type {} not implemented'.format(col.dtype))
        else:
            print('Warning: cast column {} as varchar; data type {} not implemented'.format(col, col.dtype))
            width = col.str.len().max() * (1 + pad)
            return '[{}] varchar({})'.format(col.name, int(width))

def create_table(df, tablename, server, trusted_connection, username, password, pad):
    if trusted_connection:
        login_string = '-E'
    else:
        login_string = '-U {} -P {}'.format(username, password)
    col_defs = []
    for col in df:
        col_defs += [get_column_def_sql(df[col])]
    query_string = 'CREATE TABLE {}\n({})\nGO\nQUIT'.format(tablename, ',\n'.join(col_defs))
    if overwrite:
        query_string = "IF OBJECT_ID('{}', 'U') IS NOT NULL DROP TABLE {};".format(tablename, tablename) + query_string
    query_file = 'c:\\pybcp_tempqueryfile.sql'
    with open(query_file, 'w') as f:
        f.write(query_string)
    # run the generated DDL through sqlcmd
    o = call('sqlcmd -S {} {} -i {}'.format(server, login_string, query_file), shell=True)
    if o != 0:
        raise BaseException("Failed to create table")
    # o = call('del {}'.format(query_file), shell=True)

def call_bcp(df, tablename):
    if trusted_connection:
        login_string = '-T'
    else:
        login_string = '-U {} -P {}'.format(username, password)
    temp_file = 'c:\\pybcp_tempqueryfile.csv'
    # remove the delimiter and change the encoding of the data frame to latin so sql server can read it
    df.loc[:, df.dtypes == object] = df.loc[:, df.dtypes == object].apply(
        lambda col: col.str.replace(delimiter, '').str.encode('latin'))
    df.to_csv(temp_file, index=False, sep='|', errors='ignore')
    # bulk-load the temp file into the target table (the ^ escapes | for cmd.exe)
    o = call('bcp {} in {} -S {} {} -t^{} -r\\n -c'.format(tablename, temp_file, server, login_string, delimiter), shell=True)
    if o != 0:
        raise BaseException("Failed to bulk copy the data")
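For what it's worth, here is a minimal sketch of how the two helpers might be strung together; the CSV path, server, and table name are just the placeholder values defined at the top of the script above, not anything you have to use.

# hypothetical driver for the script above
df = pd.read_csv('D:/inputdata.csv', encoding='latin')

# create the target table from the DataFrame's dtypes, then bulk-load it with bcp
create_table(df, tablename, server, trusted_connection, username, password, pad)
call_bcp(df, tablename)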


This has just recently been updated as of SQLAlchemy 1.3.0, just in case anyone else needs to know. It should make your dataframe.to_sql statement much faster.

https://docs.sqlalchemy.org/en/latest/changelog/migration_13.html#support-for-pyodbc-fast-executemany

engine = create_engine(
    "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
    fast_executemany=True)
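Putting it together, here is a minimal sketch of pushing a DataFrame through that engine with to_sql; the dummy DataFrame, the target table name 'pybcp_test', and the connection string are placeholders, so swap in your own.

import pandas as pd
from sqlalchemy import create_engine

# same placeholder connection string as above
engine = create_engine(
    "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
    fast_executemany=True)

# dummy data; with fast_executemany=True, pyodbc batches the parameterized
# INSERTs that to_sql issues instead of sending them one row at a time
df = pd.DataFrame({'a': range(100000), 'b': ['x'] * 100000})
df.to_sql('pybcp_test', engine, if_exists='replace', index=False, chunksize=10000)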