Introduction:
Sometimes we need the flexibility to upload data into a table from an Excel sheet. This can be implemented easily in JSP and other applications, and I had a requirement to do the same in Oracle Application Express (APEX).
I was going through a blog post by Ittichai on this topic, and he has given very clean steps for getting this to work. I followed the steps and they worked for me. I have documented similar steps below, as per my understanding.
Implementing data upload using an Excel sheet
Following is the table that I want to populate with data from the Excel sheet.
SQL> desc data_upld
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
SR_NO                                              NUMBER
CODE_ID                                   NOT NULL VARCHAR2(20)
CODE_CTGY                                          NUMBER
CTGY_NAME                                          VARCHAR2(20)
CTGY_DESC                                          VARCHAR2(20)
ITEM_NAME                                          VARCHAR2(100)
ITEM_DESC                                          VARCHAR2(512)
ITEM_UNIT                                          VARCHAR2(20)
ACTUL_COST                                         NUMBER
DIS_PER                                            NUMBER
SCHM_ID                                            NUMBER
1) Created a "File Browse" item on the page, with the internal name P35_UPLOAD.
2) Created a button with the internal name "Upload". This button is used to upload and process the data in the Excel sheet.
3) Created a conditional branch so that when the Upload button is pressed, it branches back to the same page (page 35 in my case).
4) When we select an Excel sheet using the browse button and click the submit button, it branches back to the same page (in effect, the page is refreshed). Even if there is no code written in the backend, APEX loads the Excel sheet as a BLOB into the table wwv_flow_files.
This is an internal table used by APEX. Our task is to read this table and fetch the row that was inserted for our upload; each upload inserts exactly one row. One column of the table (BLOB_CONTENT) is a BLOB holding the actual uploaded file, and the remaining columns are metadata about the file.
When we upload an Excel sheet, a name is generated for the file in the form FXXXXX/<file_name_you_uploaded>, and you can use this to get the file details. Also note that although the file is put into wwv_flow_files when the page is submitted, the row is deleted again as soon as the refresh completes and the page is displayed. So the row has to be processed immediately, by the code that runs when the submit button is pressed.
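As a minimal sketch (not part of the original steps), the row APEX creates for the current upload can be inspected like this; it assumes the File Browse item from step 1 is P35_UPLOAD, whose session state holds the generated FXXXXX/<file_name> value.
-- Sketch: look up the just-uploaded file by the name stored in the page item
select id, name, filename, last_updated
  from wwv_flow_files
 where name = :P35_UPLOAD;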
In my case, I want to upload the rows into the data_upld table. My Excel sheet contains 6 rows of data, one value per column of data_upld, and I want to upload all 6 rows.
5) Following is the code to process the upload.
Before using the code below, make sure you have a hex_to_decimal function in your database. The code for that function is given below.
create or replace function hex_to_decimal
--this function is based on one by Connor McDonald
--http://www.jlcomp.demon.co.uk/faq/base_convert.html
( p_hex_str in varchar2 ) return number
is
  v_dec number;
  v_hex varchar2(16) := '0123456789ABCDEF';
begin
  v_dec := 0;
  for indx in 1 .. length(p_hex_str)
  loop
    v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
  end loop;
  return v_dec;
end hex_to_decimal;
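As a quick sanity check, the function can be exercised directly; 0A is the hex code of the linefeed character that the upload code later tests for, so it should come back as 10.
SQL> select hex_to_decimal('0A') as lf_code from dual;

   LF_CODE
----------
        10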
Below is the code for uploading the Excel sheet.
DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   number := 1;
  v_line        VARCHAR2 (32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        number;
  v_sr_no       number := 1;
BEGIN
  delete from data_upld;
  -- Read data from wwv_flow_files
  select blob_content into v_blob_data
    from wwv_flow_files
   where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
     and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char      := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line      := v_line || v_char;
    v_position  := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE (v_line, ',', ':');
      -- Convert each column separated by : into array of data
      v_data_array := wwv_flow_utilities.string_to_table (v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into DATA_UPLD (SR_NO, CODE_ID, CODE_CTGY, CTGY_NAME, CTGY_DESC, ITEM_NAME, ITEM_DESC, ACTUL_COST, DIS_PER, SCHM_ID, ITEM_UNIT)
                         values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11)'
        USING
          v_sr_no,
          v_data_array(1),
          to_number(v_data_array(2)),
          v_data_array(3),
          v_data_array(4),
          v_data_array(5),
          v_data_array(6),
          to_number(v_data_array(7)),
          to_number(v_data_array(8)),
          to_number(v_data_array(9)),
          v_data_array(10);
      -- Clear out
      v_line  := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
END;
Create a new PL/SQL process under the Processes section in APEX and put this code in it. Make the process run when the Upload button is pressed.
The data in the Excel sheet will then get loaded into the table.
Hope this helps !!
Reference:
http://oraexplorer.blogspot.com/2007/11/apex-to-upload-text-file-and-write-into.html
Hi ,
Your article is very interesting.
I'm sorry, but I followed every point and it doesn't work.
There is a problem with the conversion to number.
Bye
Marco
Hi Marco,
Could you please let me know where exactly the problem is, i.e. at which step you got the error?
Regards,
Advait Deo
Hi,
I developed my page as per your instructions, but I get an error in the upload process: "table or view does not exist."
Hi,
Could you please let me know the exact place where you got the error? Also please make sure you have created the hex_to_decimal function.
Regards,
Advait Deo
Hi,
I am trying with your code, no changes at all.
It is strange; however, I am getting "ORA-01403: no data found".
I followed exactly the same process as you described.
Please help.
Hello Advait
I created the table as described in the post, copied your code and followed the same process. I am getting the "ORA-01403: no data found" error after pressing the upload button. This is what the debug output from APEX looks like:
Session State: Save form items and p_arg_values
0.03: …Session State: Saved Item “P50_UPLOAD” New Value=”F23169/santest.xls”
0.03: Processing point: ON_SUBMIT_BEFORE_COMPUTATION
0.03: Branch point: BEFORE_COMPUTATION
0.03: Computation point: AFTER_SUBMIT
0.03: Tabs: Perform Branching for Tab Requests
0.03: Branch point: BEFORE_VALIDATION
0.03: Perform validations:
0.03: Branch point: BEFORE_PROCESSING
0.03: Processing point: AFTER_SUBMIT
0.03: …Process “Data_to_table”: PLSQL (AFTER_SUBMIT) DECLARE v_blob_data BLOB; v_blob_len NUMBER; v_position NUMBER; v_raw_chunk RAW(10000); v_char CHAR(1); c_chunk_len number := 1; v_line VARCHAR2 (32767) := NUL
0.04: Encountered unhandled exception in process type PLSQL
0.04: Show ERROR page…
0.04: Performing rollback…
Please suggest.
Hi,
When we upload an Excel sheet, a name is generated for the file in the form FXXXXX/<file_name_you_uploaded>. In my code I am getting this name dynamically using the following query.
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
Instead, in your case, try to find the name in the wwv_flow_files table and use it directly in your code. Once that works, use this query to get the name dynamically.
You can write an "After submit" process to select the row from wwv_flow_files and insert it into another temp table, to see what file name it uses.
Also, in your case the problem looks to be in the code. I would need to look at the "After submit" PL/SQL process that you wrote to process the upload of your Excel sheet. Please let me know.
Regards,
Advait Deo
Hi Advait Deo
Thanks for the response. I am using exactly the same code which you pasted here, no changes at all (the after-submit code is the same, even the table structure is the same).
I have not changed any part of your query.
Can you guide me as to what might be wrong? (I followed the whole process which you documented here.)
Appreciate your support.
Thank you so much.
Sandeep
Hi
I could figure out the error: the file was not saved as a .csv file. It is working fine now. One question though: if the first row contains column names, how do I exclude it while inserting into the table? I mean, we don't want the column names to be inserted into the table. Is there any way to avoid reading the first row of data while inserting?
Thanks
Sandeep
Hi,
I have tried your code, but it's not working; can you please check what might be the problem? Below is my source code. The CSV file is loaded, but it is not able to insert the values into sample_tbl.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len NUMBER := 1;
v_line VARCHAR2 (32767):= NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows NUMBER;
v_sr_no NUMBER := 1;
BEGIN
delete from sample_tbl;
-- Read data from wwv_flow_files
select blob_content into v_blob_data from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
WHILE ( v_position <= v_blob_len ) LOOP
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into sample_tbl(col1,...col11)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11)'
USING
v_sr_no,
v_data_array(1),
v_data_array(2),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6),
v_data_array(7),
v_data_array(8),
to_date(v_data_array(9),'MM/DD/YYYY'),
v_data_array(10);
-- Clear out
v_line := NULL;
v_sr_no := v_sr_no + 1;
END LOOP;
END IF;
END;
Hi,
Great to know it's working now. In my case I simply deleted the header row and then uploaded the CSV; that's the easiest way to do it. Otherwise, if you want to handle it in your code and ignore the first row, you can change the code accordingly.
For example, you can copy the BLOB-to-char conversion code from the loop and place it once before the loop, so that it consumes the first row; the loop then starts processing from the second row and inserts from there.
The following code can be placed just before the WHILE loop:
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
Once the loop is entered after this code, processing will start from the second row, and the insert will insert values starting from the second row.
Hope this helps !!
Regards,
Advait Deo
Hi,
Did you get the solution for this?
I am also having the same issue.
The above code didn’t work for me. But the following code can be placed before the actual while loop.
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(EASY_RUN.hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
WHILE (v_char != CHR(10)) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(EASY_RUN.hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
END LOOP;
v_line:=null;
Hi Deo,
When I am trying to use the below code for removing the first row, I am facing
ORA-06502: PL/SQL: numeric or value error: character string buffer too small.
If I remove the block which you mentioned for removing the first row, the process works fine.
Please suggest.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
delete from data_upld;
select blob_content into v_blob_data
from wwv_flow_files
where filename = 'df.csv';
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
IF v_char = CHR(10) THEN
v_line := REPLACE (v_line, ',', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
v_line := NULL;
END IF;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into DATA_UPLD (SR_NO, CODE_ID, CODE_CTGY, CTGY_NAME, CTGY_DESC, ITEM_NAME, ITEM_DESC, ACTUL_COST, DIS_PER,SCHM_ID, ITEM_UNIT)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11)'
USING
v_sr_no,
v_data_array(1),
to_number(v_data_array(2)),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6),
to_number(v_data_array(7)),
to_number(v_data_array(8)),
to_number(v_data_array(9)),
v_data_array(10);
-- Clear out
v_line := NULL;
v_sr_no := v_sr_no + 1;
END IF;
END LOOP;
END;
This is an excellent post.
Anyone who follows this post fully and properly will have no problems getting this to work, provided the Excel file is saved as .csv and not .xls.
If you are experiencing problems, it's because you have not followed the instructions properly.
How do I get the file name from the wwv_flow_files table?
Hi Advait,
You have put up an excellent article. I followed the steps and it ran successfully; I am now able to upload an Excel file into the database.
I have one query regarding it: when I upload the file a second time, it overwrites the previous data. How could we APPEND the data instead of overwriting the data in the database?
Appreciate your support. Thanks a lot.
Himanshu Jalota
Hey Advait .. this is an excellent article ..
Was very helpful for one of my similar projects.
Boss, Well done !
Thanks much.
Suresh.
Hey Advait,
I got my file uploaded successfully, and it is now also appending records instead of deleting them.
I have one problem:
I want the details of the file, i.e. the date and time when it was last uploaded, shown in my GUI.
Could you please help me regarding it?
Hi Himanshu,
Just wanted to know how you implemented appending the records?
Hi,
Thanks,
it really helped me in my work.
Your article is very good. It helped me solve my problem. If you save the xls file as a csv file, the procedure works perfectly.
Thanks
Hi
This code worked for me as well, but there are two issues.
1. If there are any nulls inside the Excel data, it raises the exception ORA-01403: no data found at
v_data_array := wwv_flow_utilities.string_to_table (v_line);
Obviously the workaround is to write my own procedure. Is there a better answer?
2. If there are colons (:) inside the data, will this still work, given that it replaces , with :?
Is there any way to do the same for a tab delimited file?
Thanks
Shree
Hi Advait
Thanks for the article, very useful and easy to set up considering I've only been playing around with APEX for a few hours. I'm planning on using it for some data conversion, and it's much more user-friendly than the external table approach provided as a sample on the APEX site.
Hi,
I used your function. In my csv file there are only two columns.
When I load the file, it loads into the temporary table, but in the last column it adds two extra spaces. Even though I am using the trim function, I am not able to compare this string with another column. Can you please suggest an answer? I am really stuck in my work due to this.
Hi,
Advait,
Thank you very much for posting this great article, well done boss. I followed your given steps and it's working smoothly, thanks again.
B/rdg
Asif Bin Qadir
Hi
It works like a charm!!!!
The upload is fast too, that's the thing I like.
But can we modify this in such a way that it will support Excel and multibyte characters?
As of now the csv approach is not supporting multibyte characters.
Any workaround for the same?
Thanks & Regards,
Benseer
Did anyone ever reply to your email about making the multibyte characters work with this solution?
Instead of the hex_to_decimal() function, you may want to use the standard TO_NUMBER() function, provided by Oracle:
SQL> select to_number('FF', 'xxxxxxxxx') from dual;

TO_NUMBER('FF','XXXXXXXXX')
---------------------------
                        255
I assume this should be way faster if you try loading large files.
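To illustrate the suggestion, here is a sketch of the conversion step from the article's loop with hex_to_decimal() swapped for TO_NUMBER and a hexadecimal format mask; since c_chunk_len is 1, rawtohex() always yields two hex digits, so 'XX' is a sufficient mask.
v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
-- 'XX' is the hexadecimal format mask for TO_NUMBER
v_char      := chr(to_number(rawtohex(v_raw_chunk), 'XX'));
v_line      := v_line || v_char;
v_position  := v_position + c_chunk_len;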
Hello,
I have a question. I have tried the code above for a .csv file and this works fine. But is it possible to do the same for a tab delimited .txt file? Can this also be done? And if so, how?
Thanks in advance.
Hello Bram,
By now you have probably found a solution for uploading a tab delimited file (.txt).
Since I have not seen any solution posted here, I want to explain how you can do it.
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
-- Here, before concatenating v_char with v_line, add code to handle the tab character
-- for more details, see the reference site http://www.asciitable.com/
IF v_char = chr(9) THEN
v_char := ':';
END IF;
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
-- you don't need the following code any more, so leave it commented out
-- v_line := REPLACE (v_line, ',', ':');
I need the opposite thing: exporting a report from Application Express into Excel, which should look good :-S, with a header for example. Is this possible?
Hi,
I'm facing a problem while uploading a file which contains multibyte characters: when converting back, they get converted to some funny characters.
I changed the code to :
==========
Declare section >
v_char nCHAR(1);
v_line nVARCHAR2 (32767) := NULL;
inside code >
v_char := nchr(hex_to_decimal(rawtohex(v_raw_chunk)));
=================
But still I'm not getting the expected result.
Please advise.
I am using exactly the same code which you pasted here, no changes at all (the after-submit code is the same, even the table structure is the same).
It's not working with .xls files, but it is working with .csv files. Kindly let me know if anyone has uploaded an .xls file using this code. When we upload an .xls file we should not need to replace ',' with ':' in v_line := REPLACE (v_line, ',', ':').
I get an error "ORA-01722: invalid number" when I try to run the script.
My .csv file contains the following data:
AL4H;9901;137876,66;485270,69
AL4H;9901;137876,55;485270,69
AL4H;9901;137876,34;485270,69
The first column is a char, the other three are numbers in the destination table (boringen). The column separator is ';'.
I’ve run the following script:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char char(1);
v_chunk_len NUMBER := 1;
v_line VARCHAR2(32767);
v_data_array wwv_flow_global.vc_arr2;
v_rows NUMBER;
v_sr_no NUMBER;
BEGIN
DELETE from BORINGEN where boring > 9000;
-- Read the data from UPLOAD_EXCEL
SELECT BLOB_DATA into v_blob_data from UPLOAD_EXCEL
WHERE BLOB_ID = (SELECT MAX(BLOB_ID) from UPLOAD_EXCEL);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Convert from binary to char
WHILE (v_position <= v_blob_len) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data, v_chunk_len, v_position);
v_char := nchr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + v_chunk_len;
IF v_char = chr(10) THEN
v_line := REPLACE(v_line, ';', ':');
v_data_array := wwv_flow_utilities.string_to_table(v_line);
-- Insert data into boringen
insert into boringen (PROJECTCODE, BORING, X_COORD, Y_COORD)
VALUES (v_data_array(1), to_number(v_data_array(2)), to_number(v_data_array(3)), to_number(v_data_array(4)));
v_line := NULL;
v_sr_no := v_sr_no+1;
END IF;
END lOOP;
Commit;
END;
When I try to place a dummy value like this:
v_data_array := wwv_flow_utilities.string_to_table('TEST:9001:345345,44:345345,66');
then it works fine; I get this record three times in the table boringen.
Who can help me with this error? It looks like something goes wrong when filling v_line with data.
Thanks!
REPLACE(V_DATA_ARRAY(3), ',', '.')
VALUES (v_data_array(1), to_number(v_data_array(2)), to_number(REPLACE(V_DATA_ARRAY(3), ',', '.')), to_number(v_data_array(4)));
Hi,
This works if your file is comma separated.
Is there any way to read/insert data from an Excel sheet (not a .csv)?
Thanks
Amol
It works for me, thanks.
But I need to upload one Excel file with 12 different worksheets.
As of now csv is not supporting multibyte characters.
Can you help me with it, if it is possible, and how?
Thanks & Regards,
Cindy
Did you ever get any help on this request? I am having trouble with multi-byte input also.
I'm getting this error when uploading the Excel file: ORA-06502: PL/SQL: numeric or value error: character to number conversion error. Can you help me out in solving this? Thanks in advance.
The procedure as it is adds the end-of-line (carriage return and line feed) characters to the end of the last record on each line of the csv. So if your number is in the last column of a given line, you have two invisible non-numeric characters being stored in it.
The easiest solution is to add the following statement immediately before "v_line := REPLACE (v_line, ',', ':');":
v_line := substr(v_line, 1, length(v_line)-2);
This will remove the two hidden characters from the end of the last element in the line.
This should also be done any time you need to use the last element of a csv line in a comparison, as the hidden characters will probably not appear in any other source.
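An alternative sketch (not from the original post) that strips any trailing CR/LF characters rather than assuming exactly two of them, so it also works for files whose lines end with a line feed only:
-- Remove any trailing carriage return / line feed before splitting the line
v_line := rtrim(v_line, chr(13) || chr(10));
v_line := REPLACE (v_line, ',', ':');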
Hey. I have added in "v_line := substr(v_line, 1, length(v_line)-2);" so it now looks like this
v_line := substr(v_line, 1, length(v_line)-2)
v_line := REPLACE (v_line, ',', ':');
However I still get ORA-06502: PL/SQL: numeric or value error: character to number conversion error.
Any ideas?
Thanks, this method works well for small files; however I found it has some shortfalls.
- For large files it takes too long because of the chr(hex_to_decimal()); this can be avoided by using dbms_lob.converttoclob() to convert the entire blob VERY quickly
- Doesn't cope with quoted fields
- Doesn't cope with fields out of order, or additional fields, which users are all too prone to add
After some mucking around, here is a modified version which runs faster, handles quotes, and handles fields in any order, as long as the first record contains the field names:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_clob_data CLOB := ‘anything’; –setting it to anything initialises it for use
v_clob_len NUMBER;
dest_offset NUMBER := 1;
src_offset NUMBER := 1;
blob_csid NUMBER := dbms_lob.default_csid;
lang_ctx INTEGER := dbms_lob.default_lang_ctx;
warning INTEGER;
v_position NUMBER;
v_char varCHAR2(1);
c_chunk_len number := 1;
v_field VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_InQuotes number := 0; — 1 when inside quotes
v_LastChar char(1); — previous character encountered
v_FieldNum number; — current field number
v_Row number := 0; — number of rows analysed
v_Field_CP_NUMBER number := 1;
v_Field_CP_TITLE number := 1;
v_Field_CP_WORK_CAT number := 1;
v_Field_CP_STATUS number := 1;
v_Field_CP_CONSTRUCTOR number := 1;
v_Field_CP_START_DATE number := 1;
v_Field_CP_END_DATE number := 1;
v_Field_CP_CONTRACT_MANAGER number := 1;
BEGIN
delete from CP_NDSD_IPAC_INPUT;
— Read data from wwv_flow_files ————————————
select
blob_content into v_blob_data
from
wwv_flow_files
where
name = :P151_CP_LOAD;
v_blob_len := dbms_lob.getlength(v_blob_data);
—————– Convert data from binary to character —————————————
dbms_lob.converttoclob(v_clob_data, v_blob_data, v_blob_len, dest_offset,src_offset,blob_csid,lang_ctx,warning);
v_clob_len := dbms_lob.getlength(v_clob_data);
—————– convert data into fields —————————————————–
v_position := 1;
v_data_array(1) := NULL; — used for missing columns
v_FieldNum := 2;
WHILE ( v_position 1) loop — for each field in the first record
v_Field := Upper(v_data_array(v_FieldNum)); — convert the field name to capitals
v_Field := Trim(v_Field);
v_Field := replace(v_Field, ‘ ‘, ‘_’); — replace spaces with underscore
case (v_Field) — check which fields are in which column
when ‘CP_NUMBER’ then v_Field_CP_NUMBER := v_FieldNum;
when ‘CP_TITLE’ then v_Field_CP_TITLE := v_FieldNum;
when ‘CP_WORK_CAT’ then v_Field_CP_WORK_CAT := v_FieldNum;
when ‘CP_STATUS’ then v_Field_CP_STATUS := v_FieldNum;
when ‘CP_CONSTRUCTOR’ then v_Field_CP_CONSTRUCTOR := v_FieldNum;
when ‘CP_START_DATE’ then v_Field_CP_START_DATE := v_FieldNum;
when ‘CP_END_DATE’ then v_Field_CP_END_DATE := v_FieldNum;
when ‘CP_CONTRACT_MANAGER’ then v_Field_CP_CONTRACT_MANAGER := v_FieldNum;
else v_Field := v_Field; –had to do something here to avoid errors
end case;
v_FieldNum := v_FieldNum – 1;
end loop;
elsif (v_Row > 1) then
— Insert data into target table
EXECUTE IMMEDIATE
‘insert into CP_NDSD_IPAC_INPUT
(
CP_NUMBER,
CP_TITLE,
CP_WORK_CAT,
CP_STATUS,
CP_CONSTRUCTOR,
CP_START_DATE,
CP_END_DATE,
CP_CONTRACT_MANAGER
)
values
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)’
USING
v_data_array(v_Field_CP_NUMBER),
v_data_array(v_Field_CP_TITLE),
v_data_array(v_Field_CP_WORK_CAT),
v_data_array(v_Field_CP_STATUS),
v_data_array(v_Field_CP_CONSTRUCTOR),
to_date(v_data_array(v_Field_CP_START_DATE), ‘DD/MM/YYYY’),
to_date(v_data_array(v_Field_CP_END_DATE), ‘DD/MM/YYYY’),
v_data_array(v_Field_CP_CONTRACT_MANAGER);
end if;
v_Field := ”; — clear the current field
v_FieldNum := 2; — reset the field number
elsif (v_char chr(13)) then — normal character (ignore LF character from MSDOS format files)
v_field := v_field || v_char; — add the character to the field
end if;
end loop;
apex_application.g_print_success_message := ‘CP .csv file loaded – Row Count ‘ || to_char(v_Row);
END;
Dave,
Your example for large files is interesting.
However, it looks to me that this is not the complete code. Could you supply the complete code?
Kind Regards,
Maurice
Maurice,
I can’t see anything missing.
The smiley face should have been a colon P at the beginning of my item name, but the site altered it.
What have I neglected to explain?
Dave.
I found the solution to my issue. On Line 015: v_field VARCHAR2 (32767) := NULL;
Change to: v_field CLOB := NULL;
With the changes noted in this thread, I am now able to load any size file without error, and parsing of Comments fields is managed perfectly.
Great discussion!!
Dave,
Your code looks promising, but some statements are missing or came out wrong. The WHILE statement condition just shows (v_position 1); you have ‘end if’s with no if statements, you have a second ‘end loop’, but only one loop statement.
My bad, apparently some of my code was interpreted as special characters.
Hopefully the code below will appear OK:
Good stuff Dave, thanks!
It doesn't look like you're setting v_LastChar. I think it should be set to v_char before the END LOOP statement.
I have implemented the code as posted by Dave, and added the extra tag by JonL to handle the comma’s, however I’m still getting the dreaded ORA-06502: PL/SQL: numeric or value error.
I’ve even tried to add the v_line code discussed in the thread, but to no avail. Here’s what I have (starting with Dave’s code):
[1] Added after line #135 in Dave’s:
end if;
v_LastChar := v_char;
end loop;
[2] Added to the DECLARE statements:
v_line VARCHAR2 (32767) := NULL;
[3] Added after line #53 in Dave’s:
v_line := v_line || v_char;
[4] Added after line #70 in Dave’s:
v_line := REPLACE (v_line, ';', ':');
Note: I’ve tried four different iterations of this code change:
[4.1] v_line := REPLACE (v_line, ';', ':');
[4.2] v_line := REPLACE (v_line, ',', ':');
[4.3] v_line := REPLACE(v_line, CHR(10),':');
v_line := REPLACE (v_line, ',', ':');
[4.4] v_line := REPLACE(v_line, CHR(10),':');
v_line := REPLACE (v_line, ';', ':');
Same results for all iterations = ORA-06502: PL/SQL: numeric or value error
What am I doing wrong? Please help.
I ended up making two additional changes to Dave’s code that solved the numeric value error:
- in the DECLARE section:
replace: v_blob_len NUMBER;
with: v_blob_len NUMBER := DBMS_LOB.LOBMAXSIZE;
- just before line 046 in Dave's code, insert:
DBMS_LOB.CREATETEMPORARY(v_clob_data, TRUE);
With the changes mentioned before and using Dave's code, I am now loading the csv perfectly every time, and commas within a text cell are being handled perfectly.
I may have spoken too soon, seems that now I am getting an error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
The error seems to be coming from a Comments field in the data import. If I reduce the comments being imported, the error goes away. Suggestions??
How about uploading an XML file? 🙂
I use a .csv file, but I get the ORA-01403: no data found error.
I used Toad to debug the process, and this select gives me 'no rows returned'. Why? I also tried without the where clause, but the table is empty. Please help me.
select blob_content
into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY= :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by= :APP_USER);
I also couldn't get anywhere using the updated_by = :APP_USER condition,
but found that changing the condition to name = :P151_CP_LOAD, which was my item containing the desired filename, not only worked, but also allowed me to load multiple files with a single page submit.
Hi,
Three small remarks:
1. What's with the execute immediate? Why not a normal insert?
2. My Excel uses semicolons for separation, which at first I didn't see; as a result the string_to_table didn't work as expected, resulting in "no data found" when accessing v_data_array(2).
3. apex_util.string_to_table is probably better than wwv_flow_utilities.string_to_table
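Building on remarks 2 and 3, a small sketch (assuming the rest of the article's loop is unchanged): apex_util.string_to_table accepts the separator as a second argument, so a semicolon-separated file can be split directly instead of rewriting the line first.
-- Split the completed line on ';' rather than replacing it with ':' first
v_data_array := apex_util.string_to_table(v_line, ';');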
First of all thanks a lot Advait!
I had the same "no data found" problem. It was due (as posted by HJHorst a few days ago) to the fact that when saving in .csv format it puts ";" instead of ",", so the script didn't work properly.
Now everything is ok!
Bye,
Enrico
Hi,
Your article is very interesting.
Thanks and regards,
Varun Vashisht
There’s good info here. I did a search on the topic and found most people will agree with your blog. Keep up the good work mate!
I’m Out! 🙂
Hi,
Can this tool be used with a dblink?
I assume something like this:
".... insert into DATA_UPLD@DBLINK_NAME ..."
Thanks Advait for this post, it works great. One question remains unresolved though: the Excel upload. How can we directly upload from an Excel workbook? Any ideas?
Great piece of code, worked first time for me.
Next step is to make a generic version that you pass the TABLE_NAME to and read the COLUMN_NAMEs from the header row.
Peter.
Hello,
I am trying to import data into database with this link
https://avdeo.com/2008/05/21/uploading-excel-sheet-using-oracle-application-express-apex/
but the condition used IF v_char = CHR(10) THEN
is not clear: where are we getting the number 10 from?
When I try the same for an 11th column and increase the CHR from 10 to 11, it does not work for me.
Kindly answer.
Thanks
CHR(10) is the ASCII character that appears at the end of every line in a text file.
It has nothing to do with the number of columns.
The code I provided will work with any number of columns, the columns that aren’t recognised or aren’t of interest are ignored.
Hi Advait,
I followed your steps but keep getting the following error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small.
I found the previous records are not deleted; the procedure does not even seem to start. Please advise, thanks.
Julian
Hi, excellent posting.
It works fine when there are no null values in the csv file, but can anyone suggest how we can handle a csv file which has null values?
Regards
Vardman
Hi
Getting ORA-01403: no data found when the csv file has null values.
Regards,
Vardman Jain
Wonderful post!
Hi,
Though I don't have any nulls in my csv, I still get a "no data found" error. The following statement is causing the problem:
v_data_array := wwv_flow_utilities.string_to_table (v_line);
Does anybody know a workaround?
Regards,
K.Chaplin
Error: no data found
Solution: replace
v_line := REPLACE (v_line, ',', ':');
with
v_line := REPLACE (v_line, ';', ':');
(for *.csv)
The problem was with the replace() only. I checked line by line and corrected it; now it's working perfectly.
thanks for the reply.
I'm getting this error when uploading the Excel file: ORA-06502: PL/SQL: numeric or value error: character to number conversion error. Can you help me out in solving this? Thanks in advance.
Hi Denis, I am also getting the same error. Did you resolve your issue with uploading? Can you please tell me, if possible?
I am using the above code for my project, but it is giving an error like "no data found". I think there is an error with the delimiter, but I am not able to resolve it.
Hi,
I have used your above code. I am getting an error when I am uploading the CSV file: 'ORA-01008: not all variables bound'.
Please help
Thanks
It sounds like you have more values in the (:1, :2, … list than the USING clause that follows it.
HI,
Thanks, I'm able to upload the Excel file successfully to APEX. But there is a problem: the Excel document which I'm uploading has characters in German, French and Spanish as well as English. Apart from the English characters, all the other languages' characters are getting ignored after uploading. Can you please help me with how to overcome this issue? We have built our APEX application and it's on hold to go live because of this issue. I would really appreciate a solution to this issue.
– Thanks
Hi, has anyone tried using this with apex_collection?
– Thanks
Thanks! I got a variation of Dave’s February 10, 2010 post working.
A related issue is that I want the File Browse (from step 1) to only show CSV files by default (to make it less likely that a user will select an XLS file).
Is there an easy way to do this?
Excellent sample! Worked like a charm.
Excellent post! Really helped me! Please follow all of the instructions properly. Advait Deo made data uploading very easy through his post.
I also had the issue of the two additional characters causing a numeric conversion issue, since the last column was a number. Adding the line v_line := substr(v_line, 1, length(v_line)-2); before "v_line := REPLACE (v_line, ',', ':');" did the trick!!! Thank you!
Thanks Advait for the excellent example. Instead of using the execute immediate .. and then using clause, I just used a simple insert statement with values. 🙂
I added this to the declare section of the process:
l_file APEX_APPLICATION_FILES.name%type := :P35_UPLOAD;
and changed the select to
SELECT blob_content
INTO v_blob_data
FROM wwv_flow_files
WHERE name = l_file;
I think the file content parsing can be made easier:
Here is the idea (assuming the .csv file is uploaded and converted to V_CLOB_DATA; this Oracle version can use the REGEXP_SUBSTR function):
L_ROW_COUNT NUMBER;
L_ROW VARCHAR2(4096);
L_FIELD_NAMES_COUNT NUMBER;
L_FIELD_COUNT NUMBER;
L_FIELD VARCHAR2(1024);
L_ROW_COUNT := LENGTH(V_CLOB_DATA) – LENGTH(REPLACE(V_CLOB_DATA, CHR(10))) + 1;
FOR I IN 1 .. L_ROW_COUNT LOOP
L_ROW := REGEXP_SUBSTR(V_CLOB_DATA,'[^’ || CHR(10) || ‘]+’,1,I);
L_FIELD_COUNT := LENGTH(L_ROW) – LENGTH(REPLACE(L_ROW, ‘,’)) + 1;
/**/
FOR J IN 1 .. L_FIELD_COUNT LOOP
L_FIELD := REGEXP_SUBSTR(L_ROW,'[^,]+’,1,J);
end loop;
end loop;
replace /**/ with this to catch inconsistent column count error
IF I = 1 THEN — ensure consistent number of columns
L_FIELD_NAMES_COUNT := L_FIELD_COUNT;
ELSE
IF L_FIELD_COUNT L_FIELD_NAMES_COUNT THEN
RETURN ‘Column count problem. Check for commas in the content of the fields.’;
end if;
end if;
REGEXP_COUNT can be used with Oracle 11g to determine the row and column count
Well, looks like I messed up the posting above. It will only work if there are no commas anywhere in the uploaded csv file.
So, the original approach is better.
This is a little modification I made for the French charset. It also deals with empty lines at the end of a csv file.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_DATE1 DATE ;
V_VALEUR NUMBER ;
BEGIN
delete from evv_plval;
— Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
— htp.p (‘v_blob_len ‘ || v_blob_len);
v_position := 1;
— Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
— htp.P ('v_char '||v_char||' ('||hex_to_decimal(rawtohex(v_raw_chunk))||')');
IF hex_to_decimal(rawtohex(v_raw_chunk)) 10 AND
hex_to_decimal(rawtohex(v_raw_chunk)) 13 AND
hex_to_decimal(rawtohex(v_raw_chunk)) IS NOT NULL
THEN
v_line := v_line || v_char;
END IF;
v_position := v_position + c_chunk_len;
— htp.p (‘ v_position ‘ || v_position || ‘ v_blob_len ‘ || V_BLOB_LEN ||”);
— When a whole line is retrieved
IF v_char = CHR(10) OR (v_position=v_blob_len+1)
THEN
IF length(V_LINE) > 0
— Convert comma to : to use wwv_flow_utilities
THEN
— htp.p (‘v_line : ‘ || v_line || ‘ (‘||length(v_line)||’)’||”) ;
v_line := REPLACE (v_line, ‘,’, ‘:’);
— Convert each column separated by : into array of data
v_data_array := apex_util.string_to_table (v_line);
V_DATE1 := TO_DATE(v_data_array(2), ‘DD-MM-YYYY HH24:MI:SS’);
V_VALEUR := TO_NUMBER(REPLACE(v_data_array(3), ‘.’,’,’));
EXECUTE IMMEDIATE ‘insert into EVV_PLVAL (CLEF_VAR, DATE1, VALEUR)values (:1,:2,:3)’
USING v_data_array(1),
V_DATE1,
V_VALEUR;
— Clear out
v_line := NULL;
END IF;
END IF;
END LOOP;
END;
Hi,
I am able to upload data for VARCHAR2 columns, but I am getting the error 'ORA-06502: PL/SQL: numeric or value error' while uploading data for numeric columns.
Please advise me where I am wrong. FYI, I get this error even after using the to_number function.
Hi, I have tried and tried this code,
but nevertheless I still get "no data found".
It's specifically when I add the execute immediate 'insert into ...' that it doesn't work.
What can I possibly be missing here?
Great code. Worked once I removed all the mistakes I introduced. Especially liked the code that accounted for the 1st row being column names; I'm using that one.
Thanks
Hi Jim, can you please share the code to ignore/remove the first line of the template, which is nothing but the column header?
Your code looks very useful; unfortunately I can't get it to work.
By doing some debugging I found that "v_line" gets values like "PK" (I'm unable to copy/paste them, very strange characters).
The hex_to_decimal function does exist and is being called:
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
insert_log(2, v_line);
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
ln_lines := ln_lines +1;
insert_log(1, ln_lines);
IF ln_lines > 2 -- SKIP first 2 header rows
THEN
What could I possibly be missing? Thanks!
Hi Advait!!!
This article is great and it's just what I need, if I can make it work 😦
My problem is right at the beginning. I followed the first 4 steps:
1- I created a "File Browse" item. In settings I selected Storage Type: Table WWV_FLOW_FILES.
2- I created a button (Action: Submit).
3- I created a branch condition (Branch Point: On Submit: After Processing (After Computation, Validation, and Processing)).
4- In this step you say: "When we select an Excel sheet using the browse button and click the submit button, it branches back to the same page (in effect, the page is refreshed). Even if there is no code written in the backend, APEX loads the Excel sheet as a BLOB into the table wwv_flow_files."
When I press the button, I look in the wwv_flow_files table in Oracle, but it's empty!! So I wasn't able to "load" the file.
What did I do wrong? In the File Browse item, did you select an Excel file or a csv file?
Thanks a lot!!!
J.
Hi
I followed all the steps. I created a data_upld table like yours, and I created a csv file like yours, delimited with ','. I made the page with the file browse item, button, conditional branch and process with the code; I did not make any changes, and I get ORA-01403: no data found.
I look in wwv_flow_files in Oracle, but it's empty!!!!
What did I do wrong?
Thanks for any help,
J.
I have tried the above steps, but it doesn't work for Excel. It works only if the file uploaded is a csv. Please help.
That is correct. You have to save the file in CSV format.
It seems to me that a lot of people know how to parse XLS format (it’s built into APEX, Toad, OpenOffice, etc.), but I haven’t found documentation about it.
Thanks to all who helped me understand the whole story.
It took me an hour to get it working according to my needs.
Following are the errors I got from time to time, and their solutions.
1: No file error (your file should be an xls/xlsx but saved as csv *NO COMMA).
2: ORA-06502: PL/SQL: numeric or value error: character to number conversion error (as Michelle Holman said,
adding the line v_line := substr(v_line, 1, length(v_line)-2);
before v_line := REPLACE (v_line, ',', ':'); fixed it).
Thanks again, especially ADVAIT, SANDEEP and Holman.
Thank you very much for this posting; I was able to upload a .csv file without any problem.
But when I try to upload an .xls file, I get the error message "ORA-01403: no data found".
Please do guide me on this, as I could not get anywhere with the above mentioned workarounds.
Hey Shiva, please read the whole thread before you ask.
Okay, here is the solution:
Complete your work in your xls, and when you are done, save it as csv. Hope you got it this time.
Thanks
Hi there
I was having the same trouble as everyone else, so I tried using wwv_flow.debug to see what was actually happening. This showed that the last cell of one line was joining to the first cell of the next line, because the CHR(10) is not being replaced by a ':' to identify it as a separate array element.
The solution is to add
v_line := REPLACE(v_line, CHR(10),':');
above
v_line := REPLACE (v_line, ',', ':');
Hope this helps
Thanks
Followed the above directions and can get the file to mostly upload. Having problems with a comma being in the data. Example: "Love, Honor & Marriage" is in one column; however, the code is breaking it out as "Love" in one column and "Honor & Marriage" in a separate column.
Michelle,
See Dave’s post dated February 10, 2010 at 12:36 am. It handles values with embedded commas if you make the change I suggested in my followup.
Good stuff, and with Dave's tweaks it mostly worked like a charm.
Do change 'v_fieldnum := 2;' to 'v_fieldnum := 1;' (occurs on lines 53 and 297) if you want to include the first field as well.
Many thanks,
Sophia
I think it was correct with initializing v_fieldnum := 2.
The variable v_data_array(1) holds NULL which is used when columns aren’t in the CSV file. The first column’s data is in v_data_array(2), etc.
With your change, the data from the first column would also be imported into any columns that aren’t in the CSV file.
Dave or anyone,
Where should v_LastChar be set?
The "character string buffer too small" error comes from special characters that need char(2) instead of char(1).
I set it just before the END LOOP statement (to v_char).
Correct, thanks JonL, to follow good programming practice, it could also be set before the WHILE statement to either null, or some harmless character like space, rather than assuming an initialised value.
Hi:
For my csv file, the last column is a date column whose value is optional. If it is null, I would like to load it as sysdate+1.
here is my sample csv file:
KA-KET-HTC (2011),SPU0410LR5H-QB-2,0.385,,
KA-KET-HTC (2011),SPU0410LR5H-QB-T,0.385,,1-MAR-11
USING v_group_no,
v_sequence_no,
v_data_array(1), -- price_list_name
v_data_array(2), -- item
v_data_array(3), -- new_item_price
v_data_array(4), -- new_svr
NVL(TO_DATE(v_data_array(5), 'DD-Mon-YY'), SYSDATE+1), -- new_start_date
'PENDING',
SYSDATE,
99999,
SYSDATE,
9999;
When I run my test, I got the following error:
ORA-01847: day of month must be between 1 and last day of month
I am 200% sure (from testing different csv files) that the error is due to the last column of the first line of my sample csv being null.
Here is my complete source code:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len NUMBER := 1;
v_line VARCHAR2(32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows NUMBER;
v_group_no NUMBER;
v_sequence_no NUMBER := 2;
BEGIN
DELETE FROM REPOSITORYAPEX.KE_PRICE_LOAD_INTERFACE;
SELECT apps.qp_list_headers_b_s.NEXTVAL INTO v_group_no FROM dual;
-- Read data from wwv_flow_files
SELECT blob_content
INTO v_blob_data
FROM wwv_flow_files
WHERE last_updated = (SELECT MAX(last_updated)
FROM wwv_flow_files
WHERE UPDATED_BY = :APP_USER)
AND ID = (SELECT MAX(ID)
FROM wwv_flow_files
WHERE updated_by = :APP_USER);
v_blob_len := DBMS_LOB.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE (v_position <= v_blob_len)
LOOP
v_raw_chunk :=
DBMS_LOB.SUBSTR(v_blob_data,
c_chunk_len,
v_position);
v_char := CHR(apps.hex_to_decimal(RAWTOHEX(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE(v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table(v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into REPOSITORYAPEX.KE_PRICE_LOAD_INTERFACE
(
load_group_id ,
load_sequence_id ,
price_list_name ,
item ,
new_item_price ,
new_svr ,
new_start_date ,
process_status ,
creation_date ,
created_by ,
last_updated_date ,
last_updated_by
)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12)'
USING v_group_no,
v_sequence_no,
v_data_array(1), -- price_list_name
v_data_array(2), -- item
v_data_array(3), -- new_item_price
v_data_array(4), -- new_svr
NVL(TO_DATE(v_data_array(5), 'DD-Mon-YY'), SYSDATE+1), -- new_start_date
'PENDING',
SYSDATE,
99999,
SYSDATE,
9999;
-- Clear out
v_line := NULL;
v_sequence_no := v_sequence_no + 1;
END IF;
END LOOP;
END;
Any ideas?
Thanks in advance!
Kevin
I resolved it myself after several hours of debugging.
I needed to replace the wwv_flow_utilities.string_to_table function call with apex_util.string_to_table. There is a bug in wwv_flow_utilities.string_to_table when the last column is null/blank; apex_util.string_to_table handles that case properly.
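For reference, the swap is a one-line change in the article's loop (a sketch, assuming everything else stays the same):
-- apex_util.string_to_table copes with a null/blank last column,
-- unlike wwv_flow_utilities.string_to_table
v_data_array := apex_util.string_to_table(v_line);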
Hi,
I can read the xls file from the BLOB; the PL/SQL below reads the xls file, but it gets junk characters from the xls header and then the data.
1) How do I find where the row/column data starts?
Best Regards
Here is the output
v_blob_len:13824
————————-
v_line: >
(. @ \ p user Mullennex
B a = = x x ;T$8 X@
” 1 A r i a l 1 A
r i a l 1 A r i a l 1 A r i a l
“$”#:##0_);\(“$”#:##0\)! “$”#:##0_);[Red]\(“$”#:##0\)”
“$”#:##0.00_);\(“$”#:##0.00\)’ ” “$”#:##0.00_);[Red]\(“$”#:##0.00\)7 * 2
_(“$”* #:##0_);_(“$”* \(#:##0\);_(“$”* “-“_);_(@_). ) ) _(* #:##0_);_(*
\(#:##0\);_(* “-“_);_(@_)? : : _(“$”* #:##0.00_);_(“$”* \(#:##0.00\);_(“$”*
“-“??_);_(@_)6 + 1 _(* #:##0.00_);_(* \(#:##0.00\);_(* “-“??_);_(@_)
+
) : *
` * Sheet1
Mercury VenusSheEarth 8 t
Life! Mars Water! Jupiter Big! Saturn Rings! eod Hot! Bright!
O
————————-
————————-
PL/sql
=====
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
My_File_name Varchar2(200);
BEGIN
select name,BLOB_CONTENT INTO My_File_name,v_blob_data
from OEHR_FILE_SUBJECT
where id= (select max(id) from OEHR_FILE_SUBJECT);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
DBMS_OUTPUT.Put_LINE('v_blob_len:'||To_Char(v_blob_len));
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
--v_line := REPLACE (v_line, ',', ':');
--v_line := REPLACE (v_line, ',', ':');
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
--v_data_array := wwv_flow_utilities.string_to_table (v_line);
v_data_array := apex_util.string_to_table (v_line);
--DBMS_OUTPUT.Put_LINE(' v_data_array:'||v_data_array(v_line));
-- Insert data into target table
--EXECUTE IMMEDIATE 'insert into PLANETS (PID, NAME, FEATURES) values (:1,:2,:3)' USING V_data_array(1),v_data_array(2), v_data_array(3);
--DBMS_OUTPUT.Put_LINE(V_data_array(1) );
--DBMS_OUTPUT.Put_LINE(v_data_array(2) );
--DBMS_OUTPUT.Put_LINE(v_data_array(4) );
DBMS_OUTPUT.Put_LINE('-------------------------');
DBMS_OUTPUT.Put_LINE('v_line:'||v_line);
DBMS_OUTPUT.Put_LINE('-------------------------');
-- Clear out
v_line := NULL;
v_sr_no := v_sr_no + 1;
END IF;
End Loop;
End;
Hi,
I have tried all the changes mentioned in this post, but I'm still getting 'ORA-06502: PL/SQL: numeric or value error: character to number conversion error'.
Can anybody please help me with this error?
Code:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
delete from test_upload;
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := substr(v_line, 1, length(v_line)-2);
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into test_upload (RUN_DATE, RUN_ID, SEQ_ID, ITEM, QTR, MONTH, F_QTY, B_QTY, S_QTY)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9)'
USING
SYSDATE,
1,
to_number(v_data_array(1)),
v_data_array(2),
v_data_array(3),
v_data_array(4),
v_data_array(5),
to_number(v_data_array(6)),
to_number(v_data_array(7)),
to_number(v_data_array(8));
-- Clear out
v_line := NULL;
END IF;
END LOOP;
END;
Srinivas,
This error is typically caused by a problem with your data, not your code.
You have 4 columns where you use the to_number() function; do any of these columns have non-numeric values in them? Maybe even the column headings, as you don't appear to be discarding the first row?
Srinivas,
Are you reading an xls file or a csv file?
Raj
Dear All,
Please let me know if anyone has been able to upload xls file data into the table. I think when we try to read an xls file we first get the header of the xls file, and only then do we get the real data, right? So please, can someone tell me how to skip the xls file header and get only the Excel data?
Best Regards
Raj
All,
Has anyone loaded using an xls file, not a .csv? Since the BLOBs store xls files, I need to read the xls and then load it into a table. Please share the steps. Appreciate your time. Thanks, Phani
Hi Advait,
I noticed in your example code that, to query the uploaded file from wwv_flow_files, you do this:
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
Would it be possible for you to update that code snippet to just select the file based on the unique ‘name’ column (as has been discussed in the comments), like this:
select blob_content into v_blob_data
from wwv_flow_files
where name = :P1_FILE_BROWSE_ITEM
Apart from being simpler, faster and the recommended way of doing this, the reason I’m asking this code to be updated is because making the assumption that the most recently uploaded file will have the max(id) is risky, because when you upgrade your Application Express version, this may no longer be the case. Upon testing the 4.1 upgrade internally, we had applications where this code was used, and their uploads no longer worked throwing ‘no data found’ exceptions. The fix being switching to using a predicate on the ‘name’ column.
Many thanks,
Anthony.
Ok, so I got caught by the smiley brigade there, the code snippet should say:
…
where name = [colon]P1_FILE_BROWSE_ITEM
Anthony
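To put Anthony's correction in context, here is a minimal sketch of the lookup he recommends, assuming the File Browse page item is named P1_FILE_BROWSE_ITEM as in his example:
-- Fetch the uploaded file by its unique name instead of relying on max(id)
select blob_content into v_blob_data
from wwv_flow_files
where name = :P1_FILE_BROWSE_ITEM;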
Nice post. It helped me in uploading a csv file in APEX. But I am getting a No Data Found error while trying to upload a csv file with null values in some columns. Can anybody suggest a solution for the same?
Hi all
Thanks for Lyn and you everyone.
What Lynn said on March 16, 2011 at 12:23 pm worked for me, as below:
The solution is to add
v_line := REPLACE(v_line, CHR(10), ':');
above
v_line := REPLACE (v_line, ',', ':');
I also found another issue, which seems to be a bug: I could never get any content in the wwv_flow_files table.
Instead I use the select statement below to get the BLOB:
—————————————————
select blob_content into v_blob_data
from wwv_flow_file_objects$
where id=(select max(id) from wwv_flow_file_objects$);
—————————————————-
This works for me to upload CSV file to table
Thanks all
saven
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
Please upload a video for the same.
I'm new to Oracle and APEX and I am trying to import a csv file into a table. I have a test database and application and a live database and application. Both are APEX 4 and Oracle 11g. I have followed the brilliant instructions above on test and they work exactly as planned, and I have been able to import with no worries. Unfortunately, on the live system I get error ORA-01403: no data found when I press the submit button. I have compared both applications side by side and all buttons, processes etc. are identical.
Am I missing something? At what point does the item get uploaded to wwv_flow_files? I have run select * from wwv_flow_files at various stages of the process and it always returns no data.
Any assistance on this matter would be appreciated.
Thank you
Hi Mark,
To be honest, that happens to me regularly. However, if I keep retrying it eventually works; sometimes it fails 4 times and works on the 5th attempt. I don't know why.
I would like to resolve this imperfection before releasing in production.
If this can help you
Bruno
Thanks BrunoBr, I still have not found a solution but will let you know if I do. Maybe it's some privileges I haven't set, but if so I haven't a clue which ones.
Hello, I am getting an error while parsing the above PL/SQL query:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
I have also included the query here; please help me in solving the same.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
function HEX_TO_DECIMAL (p_hex_str in VARCHAR2)
return number
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str)
loop
v_dec := v_dec * 16 + instr(v_hex,upper(substr(p_hex_str,indx,1)))-1;
end loop;
return v_dec;
end hex_to_decimal;
BEGIN
delete from WC_WEBINAR_ATTENDANCE_DS;
Select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
IF v_char = CHR(10) THEN
v_line := REPLACE (v_line, ',', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
EXECUTE IMMEDIATE 'insert into WC_WEBINAR_ATTENDANCE_DS (PARTICIPANT_NUM,PARTICIPANT_NAME,PARTICIPANT_FNAME,PARTICIPANT_LNAME,PARTICIPANT_EMAIL,INVITED,REGISTERED,REGISTRATION_DT,ATTENDED,ATTENDANCE_DT,START_TM,END_TM,DUR,ATTENTION_DUR_RATIO,ATTENTION_TO_ATTENDANCE_RATIO,
TELECONFERENCE,COMPANY,TITLE,PHONE_NUM,ADDRESS_1,ADDRESS_2,CITY,STATE_PROVINCE,COUNTRY_REGION,ZIP_POSTAL_CODE,CONTACT_METHOD,CONTACT_METHOD_OTHER,PRESENTER_NAME,PRESENTATION_TM)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29)'
USING
to_number(v_data_array(1)),
v_data_array(2),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6),
v_data_array(7),
to_date(v_data_array(8), 'MM/DD/YYYY'),
v_data_array(9),
to_date(v_data_array(10), 'MM/DD/YYYY'),
v_data_array(11),
v_data_array(12),
to_number(v_data_array(13)),
to_number(v_data_array(14)),
to_number(v_data_array(15)),
v_data_array(16),
v_data_array(17),
v_data_array(18),
v_data_array(19),
v_data_array(20),
v_data_array(21),
v_data_array(22),
v_data_array(23),
v_data_array(24),
v_data_array(25),
v_data_array(26),
v_data_array(27),
v_data_array(28),
v_data_array(29);
v_line := NULL;
v_sr_no := v_sr_no + 1;
END IF;
END LOOP;
COMMIT;
END;
I think you need to remove the first row (header row) from the file.
Hi,
I am using this code as-is in my APEX environment to load data from a csv into a database table, but I am facing a NO DATA FOUND error.
I have done some exception handling and found that my select query always gets the data when selecting the blob content from the file.
I have created one file browse item and a submit button on the page. The button branches to the same page after submission.
There are 2 processes I use on submit (after computations and validations):
1. This process dumps the csv file into my attachment table.
It is working fine --
==========================================
declare
file_num number(8);
begin
select UPLOAD_FILE_TEST_seq.nextval into file_num from dual;
if ( :P58_FILE is not null ) then
insert into UPLOAD_FILE_TEST(file_no,file_name,file_container,mime_type)
select file_num,filename, blob_content, mime_type from wwv_flow_files where name = :P58_FILE;
delete wwv_flow_files where name = :P58_FILE;
end if;
end;
============================================================
2. In the second process I used your code and am getting NO DATA FOUND errors --
=======================================================
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
BEGIN
select FILE_CONTAINER into v_blob_data from (select * from UPLOAD_FILE_TEST order by file_no desc)
where rownum < 2;
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position); v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
IF v_char = CHR(10) THEN
v_line := REPLACE (v_line, ',', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
EXECUTE IMMEDIATE 'insert into DATA_FILE (name, subject, marks, grade) values (:1,:2,:3,:4)'
USING
v_data_array(1),
v_data_array(2),
v_data_array(3),
v_data_array(4);
v_line := NULL;
END IF;
END LOOP;
END;
=================================
3. hex_to_decimal function code –
create or replace function hex_to_decimal
( p_hex_str in varchar2 ) return number
is
v_dec number;
v_hex varchar2(16) := '0123456789ABCDEF';
begin
v_dec := 0;
for indx in 1 .. length(p_hex_str)
loop
v_dec := v_dec * 16 + instr(v_hex,upper(substr(p_hex_str,indx,1)))-1;
end loop;
return v_dec;
end hex_to_decimal;
==================
An early response would be highly appreciated.
Thanks
Jitendra
Thanks a lot for this walkthrough! It helped me do exactly what I needed! I just want to add that if the user does not wish to use wwv_flow_files but wants to put the file in a custom table of their choice, then they can benefit from the page below, which I found:
http://oraexplorer.com/2011/03/file-browser-in-apex-4-with-blob-column-specified-in-item-source-attribute/
Hi, one of my columns in the csv file contains data with double quotes. For eg: 9/16″.
When this data is being read,it is being read as “9/16″”” and is inserted into the table as “9/16″””
Is there any way to overcome this?
Thanks.
One small tip. You don't need to have the function hex_to_decimal in your database. You can replace this code: v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk))); with this one: v_char := CHR(TO_NUMBER(v_raw_chunk, 'xx'));
This works great and meets my need perfectly! I did encounter the "ORA-06502: PL/SQL: numeric or value error: character to number conversion error" when there was a numeric value in the last column, and it was resolved by adding "v_line := substr(v_line, 1, length(v_line)-2);" as specified above.
Thanks for a great post!
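A slightly more defensive variant of that fix, assuming the csv may arrive with either CRLF or LF-only line endings (so chopping a fixed two characters is not always correct), is to trim only the terminator characters that are actually present:
-- Strip any trailing CR/LF instead of always removing the last two characters
v_line := RTRIM(v_line, CHR(13) || CHR(10));
v_line := REPLACE (v_line, ',', ':');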
My code is below, but I got a number conversion error.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
delete from DWH.TMP_CITY_CODE_APEX;
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(DWH.hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into TMP_APEX_CITY_CODE (CITY_CODE, YEAR, YM01,YM02,YM0,YM04,YM05,YM06,YM07,YM08,YM09,YM10,YM11,YM12,
YM01_EUR, YM02_EUR, YM03_EUR, YM04_EUR, YM05_EUR, YM06_EUR, YM07_EUR, YM08_EUR, YM09_EUR, YM10_EUR, YM11_EUR, YM12_EUR,
YM01_TRL, YM02_TRL, YM03_TRL, YM04_TRL, YM05_TRL, YM06_TRL, YM07_TRL, YM08_TRL, YM09_TRL, YM10_TRL, YM11_TRL, YM12_TRL)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11)'
USING
v_sr_no,
v_data_array(1),
to_number(v_data_array(2)),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6),
to_number(v_data_array(7)),
to_number(v_data_array(8)),
to_number(v_data_array(9)),
v_data_array(10);
-- Clear out
v_line := NULL;
v_sr_no := v_sr_no + 1;
END IF;
END LOOP;
END;
I am trying to upload an excel file; can you suggest the changes? Also, I tried your code with a .csv file and it didn't work. Can you help? The csv file had only one column.
A few important points to note:
1. It will not be useful for importing an xls spreadsheet; this only works with .csv files. If you need to use xls, use APEX Listener EA2.
2. You will have to modify the code so that the commas in your actual data remain as they are. Right now, it will throw an error if there is a comma in your data columns.
It works! Great post, thank you.
For those whom it may interest:
while converting the file from xls to csv,
make sure the separator is (,) not (;);
otherwise change this line in the code: v_line := REPLACE (v_line, ',', ':');
to v_line := REPLACE (v_line, ';', ':');
regards,
Hello!
Does your procedure parse data in Cyrillic?
I have a problem. I tried the main procedure and it is working fine, but when I put Arabic data in the csv file, it appears as null in the table after calling the procedure. Is there a modification to make it read and insert Arabic text? Please, can anybody help me?
Hi
I have null values in my csv and I am getting a no data found error.
Please help me on this; how can I overcome it?
regards
Chandrashekar
I have tried this and everything works fine except for removing the first row as you suggested. It just puts the entire sheet, including the header row, into the table.
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
--v_rows number;
--v_sr_no number := 1;
BEGIN
--delete from CAR_TYPES;
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where name = 'F-850813000/upload_cars_data.csv';
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
v_line := NULL;
END IF;
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into CAR_TYPES (NAME, MANUFACTURER, COLOUR_1, COLOUR_2, COLOUR_3, TRIM, MPG, WEBSITE)
values (:1,:2,:3,:4,:5,:6,:7,:8)'
USING
v_data_array(1),
v_data_array(2),
v_data_array(3),
v_data_array(4),
v_data_array(5),
v_data_array(6),
v_data_array(7),
v_data_array(8);
-- Clear out
v_line := NULL;
END IF;
END LOOP;
END;
/
The truth is, I would like to get just the first row of headers into the array so that I can work with that array. I removed the loop part and left just the section for getting the first row but the array seems to be empty.
Hi Advait,
Nice post. I want to ask if your method works for .xls files too, or only .csv? And does it require APEX Listener to be installed? I am using APEX 4.1 with Oracle HTTP Server.
Hi,
The code for skipping the first header row does not work. It still processes the first row even if we start with v_position := 2;
Any suggestions?
Great code. It took me a while to get it to work, but now it adds duplicates from the spreadsheet into the table. The process usually adds all the rows twice, sometimes more, depending on how many records are in the original sheet. HELP!
In my code it is not saving the last line of the csv.
Can anyone help me?
PROCEDURE STL_EXTRAIR_ASSINANTE(p_nome_arquivo in varchar2
, p_id_req in number) is
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
--v_sr_no number := 1;
v_rows_loaded NUMBER;
BEGIN
-- Read data from wwv_flow_files
select BLOB_CONTENT into v_blob_data
from wwv_flow_files
where name = p_nome_arquivo;
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ';', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
--DELETE OLD DATA
-- Insert data into target table
--IF v_sr_no >1 THEN
INSERT INTO sittel.stl_assinante(nm_assin, cpf, cnpj, documento, id_req, id)
VALUES (v_data_array(1), v_data_array(2),v_data_array(3), v_data_array(4), p_id_req, sittel.SEQ_STL_ASSINANTE.nextval);
--END IF;
-- Clear out
v_line := NULL;
--v_sr_no := v_sr_no + 1;
END IF;
END LOOP;
commit;
END;
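Regarding the last csv line not being saved: that usually means the file does not end with a line feed, so the final buffered line never reaches the IF v_char = CHR(10) branch. A minimal sketch of one possible fix, placed right after END LOOP in the procedure above (it reuses the same separator and insert as that procedure):
-- Flush the final line if the file does not end with CHR(10)
IF v_line IS NOT NULL THEN
v_line := REPLACE (v_line, ';', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
INSERT INTO sittel.stl_assinante(nm_assin, cpf, cnpj, documento, id_req, id)
VALUES (v_data_array(1), v_data_array(2), v_data_array(3), v_data_array(4), p_id_req, sittel.SEQ_STL_ASSINANTE.nextval);
END IF;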
Very much helpful indeed, but I am getting error: ORA-06502: PL/SQL: numeric or value error: character to number conversion error.
This is my table structure:
CREATE TABLE RCMRKENT
(
RCWEBIDN VARCHAR2(50 BYTE),
RCROLLNO VARCHAR2(8 BYTE),
RCP01DES VARCHAR2(60 BYTE),
RCP01NUM NUMBER,
RCP02DES VARCHAR2(60 BYTE),
RCP02NUM NUMBER,
RCP03DES VARCHAR2(60 BYTE),
RCP03NUM NUMBER,
RCP04DES VARCHAR2(60 BYTE),
RCP04NUM NUMBER,
RCP05DES VARCHAR2(60 BYTE),
RCP05NUM NUMBER,
RCP06DES VARCHAR2(60 BYTE),
RCP06NUM NUMBER,
RCEXLIMG BLOB
)
And below is the code I used:
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
delete from RCMRKENT;
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into RCMRKENT (
RCWEBIDN, RCROLLNO,
RCP01DES, RCP01NUM,
RCP02DES, RCP02NUM,
RCP03DES, RCP03NUM,
RCP04DES, RCP04NUM,
RCP05DES, RCP05NUM,
RCP06DES, RCP06NUM)
values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14)'
USING
--v_sr_no,
v_data_array(1), v_data_array(2),
v_data_array(3), to_number(v_data_array(4)),
v_data_array(5), to_number(v_data_array(6)),
v_data_array(7), to_number(v_data_array(8)),
v_data_array(9), to_number(v_data_array(10)),
v_data_array(11), to_number(v_data_array(12)),
v_data_array(13), to_number(v_data_array(14));
-- Clear out
v_line := NULL;
-- v_sr_no := v_sr_no + 1;
END IF;
END LOOP;
END;
Please, can someone help me?
Hello,
Your code is working perfectly; the only problem is it takes too much time.
I want to upload and retrieve data from a CSV file with 67 columns and more than 3000 rows at once.
Please help.
Has anyone found a way to speed this up? I am also uploading 30 columns and 25000 rows of data. Taking 4 to 5 minutes on average.
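Most of the time here goes into reading the BLOB one byte at a time and converting each byte through rawtohex. One alternative worth trying (a sketch only, not from the original post) is to convert the whole BLOB to a CLOB once with dbms_lob.converttoclob and then walk it line by line; this is usually much faster for large files and also takes care of character-set conversion. It assumes the File Browse item (here called P1_FILE_BROWSE_ITEM) holds the uploaded file name, and the insert for your own target table goes where the comment indicates:
DECLARE
v_blob BLOB;
v_clob CLOB;
v_dest_offset INTEGER := 1;
v_src_offset INTEGER := 1;
v_lang_context INTEGER := dbms_lob.default_lang_ctx;
v_warning INTEGER;
v_pos PLS_INTEGER := 1;
v_next PLS_INTEGER;
v_line VARCHAR2(32767);
v_data_array wwv_flow_global.vc_arr2;
BEGIN
select blob_content into v_blob
from wwv_flow_files
where name = :P1_FILE_BROWSE_ITEM;
-- Convert the whole BLOB to a CLOB in one call
dbms_lob.createtemporary(v_clob, true);
dbms_lob.converttoclob(v_clob, v_blob, dbms_lob.lobmaxsize, v_dest_offset, v_src_offset, dbms_lob.default_csid, v_lang_context, v_warning);
-- Walk the CLOB line by line
LOOP
v_next := dbms_lob.instr(v_clob, CHR(10), v_pos);
EXIT WHEN NVL(v_next, 0) = 0;
IF v_next > v_pos THEN
v_line := RTRIM(dbms_lob.substr(v_clob, v_next - v_pos, v_pos), CHR(13));
v_data_array := wwv_flow_utilities.string_to_table(REPLACE(v_line, ',', ':'));
-- insert v_data_array(1), v_data_array(2), ... into your target table here
END IF;
v_pos := v_next + 1;
END LOOP;
dbms_lob.freetemporary(v_clob);
END;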
hi,
This works great, thanks, but one doubt: it is not working for an excel file with column headers. I read the previous comments and tried those things too, but it's still not working; it shows "1 error has occurred
ORA-01858: a non-numeric character was found where a numeric was expected". May I know how to get rid of this problem?
Thanks,
Regards,
pavan
Hi all,
I am using APEX for the first time and am having difficulties with the very well explained steps above. Can somebody please help me understand how to modify this code when applying it to my table? I am trying to work with the EMP table.
The error I am getting is ORA-06502: PL/SQL: numeric or value error: character to number conversion error. As stated in the earlier comments, I have also added v_line := substr(v_line, 1, length(v_line)-2); before v_line := REPLACE (v_line, ',', ':');
Can anybody please tell me where I am making mistakes?
DECLARE
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_empno number := 1;
BEGIN
delete from EMP;
-- Read data from wwv_flow_files
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = :APP_USER)
and id = (select max(id) from wwv_flow_files where updated_by = :APP_USER);
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
-- Read and convert binary to char
WHILE ( v_position <= v_blob_len ) LOOP
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
v_position := v_position + c_chunk_len;
-- When a whole line is retrieved
IF v_char = CHR(10) THEN
-- Convert comma to : to use wwv_flow_utilities
v_line := substr(v_line, 1, length(v_line)-2);
v_line := REPLACE (v_line, ',', ':');
-- Convert each column separated by : into array of data
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Insert data into target table
EXECUTE IMMEDIATE 'insert into EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
values (:1,:2,:3,:4,:5,:6,:7,:8)'
USING
v_empno,
v_data_array(1),
v_data_array(2),
to_number(v_data_array(3)),
to_number(v_data_array(4)),
to_number(v_data_array(5)),
to_number(v_data_array(6)),
to_number(v_data_array(7));
-- Clear out
v_line := NULL;
v_empno := v_empno + 1;
END IF;
END LOOP;
END;
CREATE TABLE "WAQIS"
( "ID" NUMBER(10,0) NOT NULL ENABLE,
"NAME" VARCHAR2(1000) NOT NULL ENABLE,
"DESCRIPTION" VARCHAR2(1000)
) ;
CREATE OR REPLACE TRIGGER "WAQIS_I"
before insert on "WAQIS"
for each row
begin
if :NEW."ID" is null then
select "WAQIS_SEQ".nextval into :NEW."ID" from sys.dual;
end if;
end;
/
ALTER TRIGGER "WAQIS_I" ENABLE;
create or replace package WAQISPKG as
function hex_to_decimal( p_hex_str in varchar2 ) return number;
procedure loadExcel;
end;
create or replace package body "WAQISPKG" is
procedure LOADEXCEL
as
v_blob_data BLOB;
v_blob_len NUMBER;
v_position NUMBER;
v_raw_chunk RAW(10000);
v_char CHAR(1);
c_chunk_len number := 1;
v_line VARCHAR2 (32767) := NULL;
v_data_array wwv_flow_global.vc_arr2;
v_rows number;
v_sr_no number := 1;
BEGIN
select blob_content into v_blob_data
from wwv_flow_files
where last_updated = (select max(last_updated) from wwv_flow_files where UPDATED_BY = v('APP_USER'))
and id = (select max(id) from wwv_flow_files where updated_by = v('APP_USER'));
v_blob_len := dbms_lob.getlength(v_blob_data);
v_position := 1;
v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
v_line := v_line || v_char;
IF v_char = CHR(10) THEN
v_line := REPLACE (v_line, ',', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
v_line := NULL;
END IF;
v_line:=null;
WHILE ( v_position <= v_blob_len ) LOOP
-- ... (the rest of the loop and package body was cut off when the comment was posted) ...
1. create all the above objects in the database
2. go to APEX and create a file browse item, a submit button and a branch to the same page, as already said at first
3. create a pl/sql process on the page and paste this code in the code area(this calls your procedure): WAQISPKG.loadExcel;
4. open the excel file in windows
5. go to file->save as->choose a name for it -> file type must be: CSV(comma delimited)
6. save it on the desktop
7. go back to apex , upload the document and submit it
TIPS
don’t skip number 5
It is taking only characters. If I include numbers in the excel file it throws the error ORA-06502: PL/SQL: numeric or value error: character to number conversion error.
Please help us.
I resolved the issue; now it's working fine. Thanks…
If there is a colon in your data, it will be treated like a delimiter by wwv_flow_utilities. Convert it to something else before replacing ',' with ':' and then convert it back after wwv_flow_utilities.string_to_table.
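A minimal sketch of that idea; the placeholder text is arbitrary (anything that cannot occur in your data will do):
-- Protect literal colons in the data before ':' is used as the delimiter
v_line := REPLACE (v_line, ':', '#COLON#');
v_line := REPLACE (v_line, ',', ':');
v_data_array := wwv_flow_utilities.string_to_table (v_line);
-- Put the original colons back in each element before inserting
FOR i IN 1 .. v_data_array.count LOOP
v_data_array(i) := REPLACE (v_data_array(i), '#COLON#', ':');
END LOOP;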
I followed the same concept as you explained. It's working fine. Great. Nice article, really helpful.
For anyone that has CSVs with > 10K rows, I would recommend the following: http://sourceforge.net/projects/lob2table/
Hi
I am new to APEX. I need to create a page to upload Excel data into the DB. I tried the above process but I get a 'No Data Found' error. I added a File Browse item to the page and an Upload button, and created a PL/SQL process (with the code defined here) to fire on click of the button. I get the No Data Found error, and my wwv_flow_files is empty; I don't see the blob content loaded into it.
I also tried loading the blob content into my custom table, but could not parse it using the above script; changing the script to read the blob content from my custom table instead of wwv_flow_files is not working.
Am I missing something here? Is any additional setup to be done?
Could someone help me make it work?
I followed the same concept as you explained. It's working fine with only one worksheet of Excel, or with a CSV file. How can I upload an Excel file with multiple worksheets? Could someone help me?
Your code is correct, but if the last column in the excel file is of type NUMBER, then the code does not work; it gives the error "character to number conversion error". On the other hand, if the last column is of type CHAR, then the code works properly.
This is the error for the last column: "ORA-06502: PL/SQL: numeric or value error: character to number conversion error".
I’m new to APEX.
I have a problem with the first part of the process.
I can't load the csv file into the wwv_flow_files table using the File Browse item type.
However, the upload works correctly if I import the file with the APEX file-import wizard.
many thanks
Marco
Hi,
I'm having a "no data found" error only in the date columns. I have changed them to varchar2 and tried both with and without to_date, and I still have the same problem… any solution?