batch_save
- async AsyncJournalTemplate.batch_save(main, body, columns=None, chunksize=1000)
Batch-save journal main (header) and journal body data.
- Parameters
  - main (pd.DataFrame) – journal main (header) data
  - body (pd.DataFrame) – journal body data
  - columns – column names shared by main and body, used to associate body rows with their main row (optional)
  - chunksize – number of records submitted per batch when saving (default 1000)
Example
Initialize the journal template element
from deepfos.element.journal_template import JournalTemplate

jt = JournalTemplate('Journal_elimadj')
Prepare the journal main and journal body data
import pandas as pd

main = pd.DataFrame([
    {"journal_name": "foo", "year": "2022", "period": "1", "entity": "JH", "rate": 1},
    {"journal_name": "bar", "year": "2021", "period": "2", "entity": "JH", "rate": 2}
])
body = pd.DataFrame([
    {"ICP": "HC", "account": "101", "rate": 1, "original_credit": 222, "credit": 222},
    {"ICP": "HCX", "account": "o1", "rate": 2, "original_credit": 111, "credit": 111},
    {"ICP": "HC", "account": "102", "rate": 1, "original_credit": 222, "credit": 222},
    {"ICP": "HCX", "account": "o2", "rate": 2, "original_credit": 111, "credit": 111}
])
Before passing the data to batch_save, set matching indexes on the journal main and journal body DataFrames to establish their association, then call batch_save:
main = main.set_index(['rate'], drop=False)
body = body.set_index(['rate'], drop=False)
jt.batch_save(main, body)
When the association columns share the same name in both DataFrames, the column names can instead be passed directly via the columns argument of batch_save, and that column information will be used for the association:
jt.batch_save(main, body, columns=['rate'])
In this example, the two approaches are equivalent.
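To see why the two calls group rows identically, the pairing can be reproduced with a plain pandas merge on the shared key. This is only an illustrative sketch (the merge itself is not part of the deepfos API); it assumes the main and body DataFrames defined above.

# Illustrative only: reproduce the main/body pairing with a pandas merge
# on the shared 'rate' key.
paired = main.reset_index(drop=True).merge(
    body.reset_index(drop=True), on='rate', suffixes=('_main', '_body')
)
# 'foo' (rate=1) picks up accounts 101 and 102; 'bar' (rate=2) picks up o1 and o2,
# matching the batchData shown below.
print(paired[['journal_name', 'rate', 'account']])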
The batchData that gets saved:
[
    JournalData(
        mainData=JournalMainData(
            mainActualTableName='main_table_actual_name',
            data={'journal_name': 'foo', 'entity': 'JH', 'period': '1', 'year': '2022'}
        ),
        bodyData=JournalBodyData(
            bodyActualTableName='body_table_actual_name',
            data=[
                {'original_credit': '222', 'account': '101', 'ICP': 'HC', 'credit': '222', 'rate': '1'},
                {'original_credit': '222', 'account': '102', 'ICP': 'HC', 'credit': '222', 'rate': '1'}
            ]
        )
    ),
    JournalData(
        mainData=JournalMainData(
            mainActualTableName='main_table_actual_name',
            data={'journal_name': 'bar', 'entity': 'JH', 'period': '2', 'year': '2021'}
        ),
        bodyData=JournalBodyData(
            bodyActualTableName='body_table_actual_name',
            data=[
                {'original_credit': '111', 'account': 'o1', 'ICP': 'HCX', 'credit': '111', 'rate': '2'},
                {'original_credit': '111', 'account': 'o2', 'ICP': 'HCX', 'credit': '111', 'rate': '2'}
            ]
        )
    )
]
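The example above uses the synchronous JournalTemplate; with the async variant named in the signature, the call is awaited. A minimal sketch, assuming AsyncJournalTemplate is importable from the same module and takes the element name the same way (adjust if your installation differs):

# Sketch of the async variant; the import path and constructor argument are
# assumptions based on the synchronous example above.
import asyncio
from deepfos.element.journal_template import AsyncJournalTemplate

async def save_journals(main, body):
    jt = AsyncJournalTemplate('Journal_elimadj')
    # chunksize is optional; 1000 records per chunk is the documented default.
    await jt.batch_save(main, body, columns=['rate'], chunksize=1000)

# asyncio.run(save_journals(main, body))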