Wednesday, December 3, 2025

Working with OCI CLI

The CLI provides the same core functionality as the Console, plus additional commands, such as the ability to run scripts, that extend the Console's functionality.

We can use the CLI from Cloud Shell or from our local machine.

The CLI is pre-configured with our credentials and ready to use immediately from within Cloud Shell.

Some of the use cases are:

Working with OCI Functions:

We can use the OCI and Fn Project CLIs to work with OCI Functions: create a function, push its Docker image to OCIR, invoke the function, and so on.

Working with OCI Object Storage bulk operations:

Many times we need to perform bulk operations like bulk download, bulk delete, or bulk restore of objects from Object Storage buckets. In such cases the OCI CLI is really helpful, as the examples below show.
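As a quick taste, here are two hedged examples (the namespace, bucket, and prefix names are made up; the commands themselves and the --dry-run flag are standard oci-cli):

oci os object bulk-delete --namespace-name my_namespace --bucket-name my-bucket --prefix logs/2023/ --dry-run
oci os object bulk-upload --namespace-name my_namespace --bucket-name my-bucket --src-dir ./reports

With --dry-run the command only prints what it would delete, which is a safe way to preview a bulk delete before running it for real.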

There are many such use cases where the OCI CLI works as a very powerful tool.

In this blog we will see:

  • How to install the OCI CLI locally
  • How to authenticate to OCI
  • How to use the profile that gets created locally
  • Run a few commands from the local machine for OCI Object Storage

In the next post we will see more about OCI Functions setup, working, and testing.

 

How to install the OCI CLI locally

We can follow the Oracle doc below to install the CLI based on our OS:

https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm

Before installing oci-cli, if we type oci and press Enter in the terminal, it will give the message below:


 

To install the OCI CLI on macOS with Homebrew, run the command below:

brew update && brew install oci-cli

This will download and install all the dependencies; it takes a few minutes to complete. If the formula is not linked automatically, run:

brew link oci-cli
 
Once the CLI is installed, type the command oci and press Enter. We should see usage help like below:
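We can also print the installed version to confirm the setup:

oci --version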
 
 
Now let us run some actual commands.
To do this we first need to authenticate to OCI, so run the command below:
oci session authenticate

It will ask us to select a region:


Ours is us-ashburn-1, so we just need to type 50 and press Enter.


We may get a popup like the one below; click Allow.
This will open the browser to authenticate.
Select your tenancy and click Continue:
 
 
We can have multiple sign-in options, depending on the identity provider setup we have in OCI, and we can select the option we want to log in with.
We will take the OneLogin option, which is the federated SSO one:
 
 
Once we have successfully logged in, we will get the message below in the browser:
 
 
In the terminal it will look like below.
When we do this for the first time, it creates a DEFAULT profile in the config file, which can be used to run OCI CLI commands:
 
 
Copy the command below (this is a sample command to test OCI connectivity with the profile that got created):
oci iam region list --config-file /Users/sumit_kumar/.oci/config --profile DEFAULT --auth security_token
We can see this in the .oci folder:
 
 
 
We can view the config contents to see the profile details:
type less config and press Enter.
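For a session-based login, the DEFAULT profile typically looks something like the snippet below (values redacted; the exact fields can vary by CLI version):

[DEFAULT]
fingerprint=aa:bb:cc:...
key_file=/Users/sumit_kumar/.oci/sessions/DEFAULT/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaa...
region=us-ashburn-1
security_token_file=/Users/sumit_kumar/.oci/sessions/DEFAULT/token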
 
 
 
 
When the session expires, we need to refresh the token with the command below:
oci session refresh

We will get a message like the one below when the token is successfully refreshed:
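We can also check how long the current session token remains valid with the session validate subcommand:

oci session validate --profile DEFAULT

It prints the session expiry time if the token is still valid.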
 
 
Now let us see how we can run commands to access OCI services.
Run the command we copied after authentication:
oci iam region list --config-file /Users/sumit_kumar/.oci/config --profile DEFAULT --auth security_token
 
This will give the region list and also confirms OCI connectivity from the local machine.
 
 
 
Every time we run a command, we need to pass the profile and security-token details as above.
Since these are constant for every run, we can create an alias for the OCI profile and use that:
alias op='oci --profile DEFAULT --auth security_token'
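An alias defined this way lasts only for the current terminal session. To keep it across sessions, we can append it to our shell startup file (~/.zshrc here is an assumption; use the rc file of your shell):

echo "alias op='oci --profile DEFAULT --auth security_token'" >> ~/.zshrc
source ~/.zshrc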
 
Then we just need to type the command below for the region list:
op iam region list
 
 
 
Now when we authenticate the next time with oci session authenticate, it will ask whether to create a new profile.
We can give a different profile name, or the same name, and use it accordingly.
 
 
 
 
Now let us see how we can use the OCI CLI to download Object Storage objects in bulk:
 
 
We will download the files below in bulk:
 
 
Since we are already authenticated, we just need to run the command, either with the alias or with the full command. We will try the full command:
oci os object bulk-download --config-file /Users/sumit_kumar/.oci/config --profile DEFAULT --auth security_token --namespace-name tenancy_name --bucket-name cusa-prod-xyz-integrations --prefix xyz/Alteryx/Aug23New --download-dir .
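With the alias from earlier, and since ~/.oci/config is the CLI's default config location anyway, the same call shrinks to:

op os object bulk-download --namespace-name tenancy_name --bucket-name cusa-prod-xyz-integrations --prefix xyz/Alteryx/Aug23New --download-dir .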
 
 
 
 
 
We get a response like the one below:
 
 
Now we will verify from the directory that all 22 files were downloaded successfully:
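A quick generic shell check to count them (bulk-download recreates the object prefix as folders under the download directory):

ls xyz/Alteryx/Aug23New | wc -l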
 
 
All the files were downloaded successfully.
Thus we see how useful the OCI CLI was in this example. It helped us download all the files under the given folder in the bucket with a single command. Otherwise we would have had to download them one by one from the UI.
 
Like this, there are many other use cases where the OCI CLI helps.
One example is working with OCI Functions.
In the next post we will see how to install the Fn Project CLI, work with OCI Functions, log in to Docker, push our OCI function Docker image to OCIR, and invoke these functions.
 

A Simple VBCS App to Query Oracle Fusion Tables Dynamically

Many times we need to write SQL queries to get data out of Oracle Fusion.

For this we have the BI Publisher data model. Every time we want to change the query, we need to edit the data model.

Also, it exports data in XML and only up to 200 records. If we want to export more than 200 records, we need to create a report using this data model.

In this post we will see how we can create a simple VBCS app to run  queries dynamically.

For this we will need the things below:

1). A BI Publisher data model that accepts a dynamic query as a base64-encoded string

2). A BIP report.

3). An OIC integration to run the report and provide the output to the VBCS app

4). A VBCS app to provide the UI for running the query and seeing the results.

-> We will be importing and using the fast-xml-parser library to convert the data from XML to JSON.

Let us see each step in detail.

1). A BI Publisher data model that accepts a dynamic query as a base64-encoded string

We need to create a BI Publisher data model with a PL/SQL block, like below.

 


 

We need to create two parameters. The first is query1, which will accept the base64-encoded SQL query.

Since it is a string, the maximum character limit is 32767; in this example we have used 32000.

The second parameter will be xdo_cursor.

The data model query block is:

DECLARE
  TYPE refcursor IS REF CURSOR;
  xdo_cursor refcursor;
  var        VARCHAR2(32000);
BEGIN
  -- Decode the base64-encoded parameter back into plain SQL text
  var := utl_raw.cast_to_varchar2(utl_encode.base64_decode(utl_raw.cast_to_raw(:query1)));
  -- Open the data model cursor over the decoded statement
  OPEN :xdo_cursor FOR var;
END;
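To see what the decode line does in isolation, the same expression can be run standalone in any Oracle SQL client (the encoded literal here is the query "select 1 from dual"):

select utl_raw.cast_to_varchar2(
         utl_encode.base64_decode(
           utl_raw.cast_to_raw('c2VsZWN0IDEgZnJvbSBkdWFs'))) as decoded_sql
from dual;

This returns the text select 1 from dual.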

 

Now let's run a query and save the sample data, so that we can use the saved sample data to create the report.

Let's write a simple query (there should not be a semicolon at the end):

select invoice_id, invoice_num from ap_invoices_all

Now we need to convert this to base64, because passing the query directly will give an error.

We can use any text editor or an online tool to convert it to base64.


The encoded string for the above query is:

c2VsZWN0IGludm9pY2VfaWQsIGludm9pY2VfbnVtIGZyb20gYXBfaW52b2ljZXNfYWxs
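If a terminal is handy, the standard base64 utility produces the same string (the -n keeps the trailing newline out of the encoding):

echo -n 'select invoice_id, invoice_num from ap_invoices_all' | base64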

Now let's pass this into our data model's query1 parameter and click View:

 



We will get the output like below:

 

Save this as sample data.

Now let's create a report.

2). A BIP report.

Select the data model for the report:

There is no need to have a layout, as it will give the data XML as output. We can create a template and set an output type as well.

Save the report and then view it.

 

 

We can export this report as data XML.

 

 

3). An OIC integration to run the report and provide the output to the VBCS app

We will create a simple OIC integration that accepts the base64-encoded query from VBCS, runs the BIP report, and returns the base64 report bytes to VBCS.

 


 

 

 

 

Now call the BIP report using a SOAP adapter:

 

 

Set attributeFormat to xml:

 

 

Finally send the response back to VBCS:

 

 

4). A VBCS app to provide the UI for running the query and seeing the results.

This is the start and end of the whole process.

In the sample app we have provided a text area to enter the query and then a run icon.

There is a progress bar to display the progress of the steps in query execution, a table to display the results, and an export button to export the result as CSV.

The UI looks like below:


 

Now let's run a query:

 

 

The result will look like below:


We can export this output to CSV.

Another query example, with the previous one commented out:


 


We need to do the things below in VBCS:

  • Take whatever query is in the text area and encode it to base64 through JavaScript:

Sample function:

EncodeToBase64(arg1) {
  // window.btoa base64-encodes the query string
  let encoded = window.btoa(arg1);
  return encoded;
}

The argument will be the query text from the text area.
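One caveat on this encoding step: window.btoa throws an error for characters outside Latin-1. If queries may contain such characters (in string literals, for example), a UTF-8-safe variant looks roughly like the sketch below (an assumption, not part of the original app):

EncodeToBase64Utf8(arg1) {
  // Convert the string to UTF-8 bytes first, then base64 the byte string
  const bytes = new TextEncoder().encode(arg1);
  let binary = '';
  bytes.forEach((b) => binary += String.fromCharCode(b));
  return window.btoa(binary);
}

The database side would then need to decode the bytes using the matching character set.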
  • Call the REST endpoint (the OIC integration), passing this base64 string
  • The response from the integration will be the base64 report bytes. We need to decode them using JS.
The sample function:

DecodeOutput(arg1) {
  // window.atob decodes the base64 report bytes back to text (the data XML)
  let decoded = window.atob(arg1);
  return decoded;
}
 
  • This will return the XML; we need to parse it and convert it to JSON.
  • We will use the fxp (fast-xml-parser) library.
We can get it from here: https://naturalintelligence.github.io/fast-xml-parser/lib/fxp.min.js
For the import in VBCS we need to do the things below:
  • Place the library in our resources/js folder.

Make an entry in index.html as follows: <script src="resources/js/fxp.min.js"></script>

 

The sample JS function to convert the XML to JSON will be:

ConvertXmlToJson(arg1) {
  return new Promise(function (resolve, reject) {
    let data = arg1;
    // fast-xml-parser turns the BIP data XML into a plain JS object
    const parser = new fxp.XMLParser();
    let result;
    try {
      result = parser.parse(data);
      resolve(result);
    } catch (err) {
      console.log(err);
      reject(err);
    }
  });
}
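To make the two parsing functions below easier to follow, this is roughly the shape fast-xml-parser produces for the BIP data XML of our sample query (illustrative values):

// Input (simplified): <ROWSET><ROW><INVOICE_ID>1</INVOICE_ID><INVOICE_NUM>A1</INVOICE_NUM></ROW>...</ROWSET>
// With multiple rows, ROW parses to an array:
// { ROWSET: { ROW: [ { INVOICE_ID: 1, INVOICE_NUM: "A1" }, { INVOICE_ID: 2, INVOICE_NUM: "A2" } ] } }
// With a single row, fast-xml-parser returns ROW as a plain object instead of an array,
// which is why the functions below branch on Array.isArray().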
 
Now we have the JSON. We need to display this data in a table.
For that we need column headers and data.
We will create two variables, one for the table columns and one for the table data:
the table column variable of type Any, and the table data variable as an ADP.
        
 
Below is a sample function to create the column headers for the table:
 
ParseReportJsonReturnTableHeaders(arg1) {
  var ColumnArrayOfJsonObject = [];
  var valueVar = {};
  const obj = JSON.parse(arg1);
  // Headers are derived from the keys of the first data row only
  var MaxHdrCounter = 1;
  for (var key in obj) {
    if (key == "ROWSET") {
      for (var rowkey in obj[key]) {
        if (MaxHdrCounter > 1) break;
        if (Array.isArray(obj[key][rowkey])) {
          // Multiple rows: ROW is an array, take the keys of its first element
          for (var actualrowkeyArray in obj[key][rowkey]) {
            if (MaxHdrCounter > 1) break;
            for (var innermostkeyArray in obj[key][rowkey][actualrowkeyArray]) {
              valueVar.headerText = innermostkeyArray;
              valueVar.field = innermostkeyArray;
              ColumnArrayOfJsonObject.push({ ...valueVar });
              MaxHdrCounter = MaxHdrCounter + 1;
            }
          }
        } else {
          // Single row: ROW is a plain object, take its keys directly
          for (var actualrowkeyNonArray in obj[key][rowkey]) {
            valueVar.headerText = actualrowkeyNonArray;
            valueVar.field = actualrowkeyNonArray;
            ColumnArrayOfJsonObject.push({ ...valueVar });
            MaxHdrCounter = MaxHdrCounter + 1;
          }
        }
      }
    }
  }
  return ColumnArrayOfJsonObject;
}
 

Similarly, we need to do this for the data in the table.
We will be using an Array Data Provider.
By default it is not available to use in the JS functions; we need to import it as below.

At the beginning (first line) of the JS file we update it like this:

define(['ojs/ojarraydataprovider'], (ArrayDataProvider) => {
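For orientation, a minimal sketch of how the whole page module file is laid out once the import is added (the class name and contents are placeholders):

define(['ojs/ojarraydataprovider'], (ArrayDataProvider) => {
  'use strict';

  class PageModule {
    // the custom functions shown in this post go here, e.g.
    // ParseReportJsonReturnTableHeaders(arg1) { ... }
    // ParseReportJsonReturnTableData(arg1) { ... }
  }

  return PageModule;
});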
 
Below is a sample function to create the data for the table:
 
ParseReportJsonReturnTableData(arg1) {
  var DataArrayOfJsonObject = [];
  var JsonObjVar = {};
  var columnName = [];
  var dataRowArray = [];
  var i = 0;
  const obj = JSON.parse(arg1);
  for (var key in obj) {
    if (key == "ROWSET") {
      for (var rowkey in obj[key]) {
        if (Array.isArray(obj[key][rowkey])) {
          // Multiple rows: ROW is an array, build one flat object per row
          for (var actualrowkeyArray in obj[key][rowkey]) {
            for (var innermostkeyArray in obj[key][rowkey][actualrowkeyArray]) {
              columnName[i] = innermostkeyArray;
              dataRowArray[columnName[i]] = obj[key][rowkey][actualrowkeyArray][innermostkeyArray];
              i = i + 1;
            }
            JsonObjVar = dataRowArray;
            DataArrayOfJsonObject.push({ ...JsonObjVar });
          }
        } else {
          // Single row: ROW is a plain object
          for (var actualrowkeyNonArray in obj[key][rowkey]) {
            columnName[i] = actualrowkeyNonArray;
            dataRowArray[columnName[i]] = obj[key][rowkey][actualrowkeyNonArray];
            i = i + 1;
          }
          JsonObjVar = dataRowArray;
          DataArrayOfJsonObject.push({ ...JsonObjVar });
        }
      }
    }
  }
  // Wrap the rows in an Array Data Provider; '@index' keys rows by position
  return new ArrayDataProvider(DataArrayOfJsonObject, {
    keyAttributes: '@index'
  });
}
 
 
In the action chain we will update the table column variable and the table data variable with the responses of the two functions above.
 
Now in the table we can map the column and data variables separately, like below:


 
In the code mode it will look like below: 

 

Note: We can combine some of the functions we saw above, to reduce the total number of functions.

This is a simple app that shows the flow of how we can query dynamically from VBCS, present the data in tabular form after parsing, and also provide an option to download the data as CSV (we need to search for and install the export data component under Components).

The mapping for export data will look like below:



We can enhance this application to have features like:

  • Format query
  • Query history - store the executed queries in ATP with environment, query, user, unique ID, and status
  • Rerunning past queries - fetch the query from ATP and allow it to be rerun
  • Running queries against different Fusion environments from the same page - create an OIC connection for each environment, add a branch in the integration for each one, and add an LOV on the VBCS page that is passed to the integration
  • Having links for Fusion tables, etc. - this makes it easier to find the Fusion tables; we can give the OER link for each module's tables

as shown in the screenshots below.

 Before Formatting:


 

 After Formatting:


 

 

Get the previously run queries, which we stored in ATP:


 

Select the query we want to rerun and click the highlighted rerun button:

 

Run queries in different Fusion environments from a single page, without needing to open the BIP of each environment.



Links to Oracle OER tables for different modules