The Bulk Load Wizard is a tool that helps you take a local data file, upload it to cloud object storage, and then load the data into a table.
🔎 NOTE: Aginity currently supports uploading to Amazon Redshift and Snowflake. Aginity will be adding support for other database platforms in future releases.
To start the data import wizard, go to the [Data Upload] menu as shown below.
How to choose a file?
When you start the wizard you will be presented with a six-step process in which you identify a file to upload, describe the behavior you want, and then load the data into a table. On the first screen the following activities are performed.
- Pick the database connection you will upload data to.
- Add a local file, either CSV or Excel, using the [Add File] button.
- Confirm the file type
- For CSV files, choose the delimiter used to separate fields in the file.
- For Excel files, choose the sheet with data to be uploaded.
- Select the Date and Time Formats
- If you check the [With header] box, the parser will treat the first row in the file as a header; otherwise it will treat the first row as a data row.
- You can click [Update preview] to show the first few rows of the file.
🔎 NOTE: You can also use a Data Upload Template by selecting one from the [Upload Template] drop-down. See the Data Upload Template documentation.
When complete, choose [Next].
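The delimiter and header behavior on this screen can be sketched locally. The following is a minimal illustration (not Aginity's actual code) of what a preview like [Update preview] does with a pipe-delimited CSV:

```python
import csv
import io

def preview_csv(text, delimiter=",", with_header=True, max_rows=5):
    """Parse CSV text, optionally treating the first row as a header,
    and return (header, first few data rows) -- a sketch of the
    wizard's preview step."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter)
    rows = list(reader)
    if with_header and rows:
        header, data = rows[0], rows[1:]
    else:
        # No header row: generate placeholder column names
        width = len(rows[0]) if rows else 0
        header = [f"column_{i + 1}" for i in range(width)]
        data = rows
    return header, data[:max_rows]

sample = "id|name|signup_date\n1|Ada|2023-01-15\n2|Grace|2023-02-20\n"
header, rows = preview_csv(sample, delimiter="|")
print(header)  # ['id', 'name', 'signup_date']
print(rows)    # [['1', 'Ada', '2023-01-15'], ['2', 'Grace', '2023-02-20']]
```

Note how unchecking the header option (here `with_header=False`) would turn that first row into a data row, which is exactly why the [With header] check box matters for files that lack column names.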
In the next screen, you will provide your security credentials specific to the object storage you are uploading to. Reference the sections below for specific information on each platform supported.
How to upload a file?
When working with Amazon Redshift or Snowflake, the file you upload is first moved to an S3 bucket of your choosing before being loaded into the target table. You must provide the following items.
- Object Store
- Your Access Key
- Your Secret Key
- The Region Endpoint
The second section lets you control the behavior of the file you upload to S3.
- If you select the check box labeled [Leave the intermediate file on the cloud after importing data], a copy of the file will remain on S3 (Azure Blob Storage is also an option).
- You can enter a filename of your choice for the file.
- If you want the file compressed in S3, you can choose from the NONE, ZIP, or GZIP options.
- You can control what happens if the file already exists: either overwrite it or abort the process.
🔎 NOTE: For information about the [Include sensitive Data into Upload Template] check box, see the Data Upload Template documentation.
🔎 NOTE: Please refer to the AWS bucket naming conventions article for proper naming standards.
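The compression choice simply changes what bytes land in the bucket. A minimal sketch of the GZIP option (the function and key names here are illustrative, not Aginity's API):

```python
import gzip

def prepare_for_s3(data: bytes, filename: str, compression: str = "GZIP"):
    """Compress file contents before upload, mirroring the wizard's
    NONE/GZIP choices (ZIP omitted for brevity). Returns the object
    key and the bytes to upload."""
    if compression == "GZIP":
        return filename + ".gz", gzip.compress(data)
    return filename, data  # NONE: upload the file as-is

key, payload = prepare_for_s3(b"id,name\n1,Ada\n", "customers.csv")
print(key)  # customers.csv.gz
assert gzip.decompress(payload) == b"id,name\n1,Ada\n"
```

Compressing before upload reduces transfer time and storage cost, and both Redshift and Snowflake can load gzipped files directly, which is why it is usually worth enabling for large files.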
How to choose columns and rows?
In step 3, you can set a numeric limit on both the number of columns to parse and the number of rows to load. This step is valuable if you have very wide and/or very long files and want to work with fewer columns or rows.
🔎 NOTE: With very large files, it is helpful to load only the first 100 rows to make sure everything works, rather than letting a full load run for an hour only to find out it failed.
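Limits like these can be applied while streaming the file, so a very large file never has to be read in full. A minimal sketch of the same idea:

```python
import csv
import io
from itertools import islice

def load_limited(text, max_cols=None, max_rows=None):
    """Stream a CSV and stop after max_rows rows, keeping only the
    first max_cols columns -- the same idea as the step 3 limits."""
    reader = csv.reader(io.StringIO(text))
    limited = islice(reader, max_rows) if max_rows else reader
    return [row[:max_cols] if max_cols else row for row in limited]

# A "very wide and very long" file: 1,000 rows x 200 columns
wide = "\n".join(",".join(f"r{r}c{c}" for c in range(200)) for r in range(1000))
rows = load_limited(wide, max_cols=10, max_rows=100)
print(len(rows), len(rows[0]))  # 100 10
```

Because `islice` stops reading once the row limit is reached, the cost of a trial load is proportional to the sample size, not the file size.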
How to choose columns to include?
This step may seem redundant with the prior step, but it lets you select individual columns more precisely. Say you have 200 columns and you know you only want the first 10: in Step 3 you can limit parsing to columns 1 through 10, and in Step 4 you can preview them and set the order of the columns you want to load.
How to configure output columns?
The next step lets you control the column data type and nullability.
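Choosing a data type and nullability per column amounts to scanning the sampled values. A simplified, hypothetical stand-in for that inference (Aginity's actual rules may differ):

```python
def infer_column(values):
    """Infer a SQL type and nullability for one column from sampled
    string values -- a simplified sketch of the step 5 defaults."""
    nullable = any(v == "" for v in values)          # empty cell => NULL allowed
    nonnull = [v for v in values if v != ""]
    if all(v.lstrip("-").isdigit() for v in nonnull):
        sql_type = "INTEGER"
    else:
        try:
            for v in nonnull:
                float(v)                             # all numeric?
            sql_type = "DOUBLE PRECISION"
        except ValueError:
            width = max((len(v) for v in nonnull), default=1)
            sql_type = f"VARCHAR({width})"
    return sql_type, nullable

print(infer_column(["1", "2", ""]))       # ('INTEGER', True)
print(infer_column(["1.5", "2.0", "3"]))  # ('DOUBLE PRECISION', False)
print(infer_column(["Ada", "Grace"]))     # ('VARCHAR(5)', False)
```

This is why it is worth reviewing the inferred types here: a sample that happens to contain only digits, for example, would default to an integer type even if later rows contain text.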
How to set output details and load?
In step 6 you supply the target database, schema, table name, and other behaviors that will be followed during the load.
When configuration is finished, click [Start Upload] to load the data into the table.
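For Amazon Redshift, the final load from S3 is typically performed with a COPY statement. The following builds an illustrative statement (bucket, table, and the use of a default IAM role are placeholders, not what the wizard necessarily generates):

```python
def build_redshift_copy(table, bucket, key, region, gzipped=True):
    """Build a Redshift COPY statement that loads a CSV object from S3
    into a table -- an illustration of the load step, not Aginity's
    actual code. Credentials here come from an attached IAM role."""
    options = "CSV IGNOREHEADER 1"
    if gzipped:
        options += " GZIP"  # matches the GZIP compression choice above
    return (
        f"COPY {table} FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE default REGION '{region}' {options};"
    )

sql = build_redshift_copy(
    "analytics.customers", "my-bucket", "customers.csv.gz", "us-east-1"
)
print(sql)
```

Snowflake uses the analogous `COPY INTO` statement with a stage; in both cases the intermediate S3 object from step 2 is what the database reads.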
In the left corner you will see a status message indicating whether the insert process started successfully; after some time, a second message will report whether the data insert itself succeeded or failed.
You can also go to the Job Monitor to see more details about the Data Upload status. Select the task you are interested in and click [Job Details] to see detailed error messages and statistics.