Most python(3)esque way to repeatedly SELECT from MySQL database
I have a csv file of customer ids (crm_id). I need the primary keys (an autoincrement int) from the customers table of the database. (I can't be assured of the integrity of the crm_ids, so I chose not to make them the primary key.)

So:
import csv

customers = []
with open("crm_ids.csv", 'r', newline='') as csvfile:
    customerfile = csv.DictReader(csvfile, delimiter=',', quotechar='"', skipinitialspace=True)  # only 1 "crm_id" field per row
    customers = [c for c in customerfile]
So far so good? I think that's the pythonesque way of doing it (but happy to hear otherwise).
Now comes the ugly code. It works, but I hate appending to the list because it has to copy and reallocate memory on each loop, right? Is there a better way (pre-allocating plus enumerate to keep track of the index comes to mind, or maybe there's a quicker/better way of being clever with the SQL so it isn't several thousand separate queries...)?
import sys
import mysql.connector

cnx = mysql.connector.connect(user='me', password=sys.argv[1], host="localhost", database="mydb")
cursor = cnx.cursor()
select_customer = "SELECT id FROM customers WHERE crm_id = %(crm_id)s LIMIT 1;"
c_ids = []
for row in customers:
    cursor.execute(select_customer, row)
    # note fetchall() returns a list of tuples; the selected set
    # only has a single column, so we need column [0]
    c_ids.extend(cursor.fetchall())
c_ids = [c[0] for c in c_ids]
Edit: the purpose is to get the primary keys in a list so I can use them to allocate other data from other csv files into linked tables (the customer id primary key is a foreign key in these other tables, and the allocation algorithm changes, so it's best to have the flexibility of doing the allocation in python rather than hard-coding SQL queries). I know it sounds a little backwards, but the "client" works with spreadsheets rather than an ERP/PLM, so I have to build the "relations" in a small app myself.
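Since the keys are used to look up foreign keys for other csv files, one round trip that builds a crm_id-to-primary-key dict may fit better than a flat list. A minimal runnable sketch, using an in-memory sqlite3 table as a stand-in for the MySQL customers table (the table contents here are made up; with mysql.connector the cursor calls are the same, only the connect call and the %s placeholder style differ):

```python
import sqlite3

# Stand-in for the MySQL database so the sketch runs anywhere.
cnx = sqlite3.connect(":memory:")
cursor = cnx.cursor()
cursor.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY AUTOINCREMENT, crm_id TEXT)")
cursor.executemany("INSERT INTO customers (crm_id) VALUES (?)",
                   [("A100",), ("A101",), ("A102",)])

# One query for all rows, then a crm_id -> primary key dict, so the
# allocation code can look up foreign keys without further round trips.
cursor.execute("SELECT crm_id, id FROM customers")
pk_by_crm = {crm_id: pk for crm_id, pk in cursor.fetchall()}
print(pk_by_crm["A100"])  # -> 1
```

If the customers table is much larger than the csv, a WHERE crm_id IN (...) filter (as in the answer below) keeps the result set small; otherwise fetching the whole two-column table once is hard to beat.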
What about changing the query to get exactly what you want?
crm_ids = ",".join(c["crm_id"] for c in customers)
select_customer = "SELECT DISTINCT id FROM customers WHERE crm_id IN (%s);" % crm_ids
MySQL should be fine with a multi-megabyte query, according to the manual; if you end up with a really long list, you can break it up into 2 or 3 queries - still guaranteed to be faster than a few thousand.
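The break-it-up idea can be sketched with one placeholder per value, so the driver handles quoting instead of string interpolation. The helper name and chunk_size are illustrative, not from the post, and sqlite3 stands in for MySQL so the example runs anywhere (sqlite3 uses ? placeholders where mysql.connector uses %s):

```python
import sqlite3

def fetch_ids_chunked(cursor, crm_ids, chunk_size=1000):
    """Run the IN-list query in chunks so no single statement gets huge.

    One placeholder per value lets the driver do the quoting; for
    mysql.connector the placeholder would be "%s" instead of "?".
    """
    ids = []
    for start in range(0, len(crm_ids), chunk_size):
        chunk = crm_ids[start:start + chunk_size]
        placeholders = ",".join("?" * len(chunk))
        cursor.execute(
            "SELECT DISTINCT id FROM customers WHERE crm_id IN (%s)" % placeholders,
            chunk)
        ids.extend(row[0] for row in cursor.fetchall())
    return ids

# Demo against an in-memory sqlite3 table (stand-in data, not from the post).
cnx = sqlite3.connect(":memory:")
cur = cnx.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, crm_id TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "A100"), (2, "A101"), (3, "A102")])
print(sorted(fetch_ids_chunked(cur, ["A100", "A102"], chunk_size=2)))  # -> [1, 3]
```

A chunk_size in the low thousands keeps each statement well under MySQL's default max_allowed_packet while preserving the few-round-trips advantage.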