You can request a Sql.Db.Mine provider from FINBOURNE and run it locally to create and publish a custom provider in your LUSID domain that is able to read from, write to, or configure your own SQL database.

Note: You can connect to the following databases: MySql, Odbc, Oracle, PostgreSQL, SqLite, SqlServer, GoogleBigQuery, AwsAthena, Db2, Snowflake.

The goal is to enable end users to write simplified or more performant queries using the custom provider available from your domain's Luminesce catalog rather than interacting with the database directly, which they might not have the knowledge, infrastructure or permissions to do.

Note: To query your custom provider, an end user must have a valid user account in your LUSID domain, and suitable access control permissions to use that custom provider. See Understanding access control permissions below.

Consider the example of a CustomProviders.RetrieveInternalTransactionData custom provider you publish that:

  • Retrieves particular transaction data from your database in a highly performant manner, for example instrument names, unique identifiers, settlement dates, quantities and prices.
  • Calculates the cost of each transaction on-the-fly by multiplying quantity by price.

An end user can then write a simple Luminesce SQL query that retrieves data from your SQL database and processes it, perhaps by upserting it to LUSID. For example, the following query uses CustomProviders.RetrieveInternalTransactionData to retrieve all the transactions that occurred on 22 April 2022, and then the Lusid.Portfolio.Holding.Writer provider (supplied by FINBOURNE) to set them as holdings in LUSID.

Note: The column names retrieved from your database must match those expected by the FINBOURNE provider (hence some have been renamed).

@table_of_data = select
  'Finbourne-Examples' as PortfolioScope, 'UK-Equities' as PortfolioCode,
  SettleDate as EffectiveAt, Quantity as Units, Currency as CostCurrency, Cost as CostPrice,
  Figi,
  'Set' as WriteAction
from
  CustomProviders.RetrieveInternalTransactionData where SettleDate = #2022-4-22#;
select * from Lusid.Portfolio.Holding.Writer where toWrite = @table_of_data;

Hosting the Sql.Db.Mine provider

To run Sql.Db.Mine locally:

  1. Make sure your computer has .NET 6 installed.
  2. As a LUSID user with the lusid-administrator role, navigate to https://<your-domain>.lusid.com/honeycomb/swagger/index.html, where <your-domain> is the root of your LUSID domain, for example acmecorp. If you do not have the lusid-administrator role, see Appendix A.
  3. Use Swagger's Try it out button for the DownloadBinary API to request the appropriate SqlDb_Providers_* binary, for example for PostgreSQL.
  4. Click the Download file link in the response to download the binary to your browser's download location.
  5. Navigate to the download and run an appropriate command, omitting the .nupkg file extension and, on Mac, using the x64 version of dotnet:
    C:\Users\JohnDoe\Download> dotnet tool install Finbourne.Luminesce.DbProviders.PostgreSql -g --add-source "."
    Note that the service is installed in the following location:
    • Windows: %USERPROFILE%\.dotnet\tools\.store\
    • Linux/MacOS: $HOME/.dotnet/tools/.store/
  6. Run the following command to start the service (omitting the .exe extension on Linux or MacOS):
    C:\Users\JohnDoe\.dotnet\tools> luminesce-database-providers.exe --routeAs=User --quiet --authClientDomain=<your-domain> --certificateFallback=DownloadMayMint
    ...where <your-domain> is the root of your LUSID domain, for example --authClientDomain=acmecorp. The DownloadMayMint option mints a new certificate if you do not have one already. More information on managing certificates is coming soon.

    For information on all the command line options, run the command luminesce-database-providers.exe --help.

    Note: The above command shows a user setup; once you are ready to make your providers globally available you can switch to --routeAs=Global. You may need to switch to global routing if you experience the following error: 

    ACCESS_REFUSED - read access to topic '<your-domain>.#' in exchange 'external_fs_updates' in vhost 'honeycomb-grid' refused for user 'client-<your-domain>_<your-userID>'', classId=50, methodId=20))' for '_fsUpdateSub'
    
  7. When prompted in a browser window, authenticate to LUSID as a user with administrative privileges.

With the Sql.Db.Mine provider running locally, you can now write a Luminesce SQL query using any tool, for example the LUSID web app. Note the following:

  • You may need to click the Refresh button in the web app catalog to see Sql.Db.Mine.
  • FINBOURNE hosts a Sql.Db provider that may also appear in your catalog. Do not use this; use Sql.Db.Mine.
  • You can rename Sql.Db.Mine.

To stop Sql.Db.Mine, press CTRL-C. To start it again, rerun the luminesce-database-providers.exe command (it should be faster subsequent times). Your LUSID credentials are remembered for one hour, after which you must authenticate again.

To uninstall the package, perhaps in order to upgrade to a newer version, run the command dotnet tool uninstall Finbourne.Luminesce.DbProviders.PostgreSql -g.

Understanding access control permissions

You, and any other LUSID administrative user wishing to run queries against Sql.Db.Mine in order to publish custom providers for end users, must have access control permissions to use the Sql.Db.Mine provider. The general principle of creating policies for Luminesce providers, assigning them to roles and roles to users, is explained in this article. Note you should automatically have permission if you are the LUSID domain owner with the built-in lusid-administrator role.

You, and any other LUSID administrative user wishing to run queries, should also have access control permissions to use connection string aliases in order to connect Sql.Db.Mine to your SQL database.

Your end users should not have access control permissions to use Sql.Db.Mine. Instead, they require access control permissions just for those published custom providers you wish them to use.

Renaming Sql.Db.Mine

You can optionally rename the Mine suffix (though not the Sql.Db prefix). To do this, open <binary-install-loc>/api-settings.json and change:

"ProviderNameFormat": "{0}.Mine",

to (for example):

"ProviderNameFormat": "{0}.Oracle",

The new name is reflected in the Luminesce catalog the next time you sign in. Note you will need to restart Sql.Db.Mine if you change this file while the service is running.
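
After restarting, you can confirm the provider is registered under its new name by querying the catalog registry (a quick check using the Sys.Registration provider described under Getting help below):

select distinct Name from Sys.Registration where Name like 'Sql.Db%'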

Connecting to your SQL database

You can connect to any supported SQL database for which you have a valid connection string.

You can hard-code the connection string in the provider (using the --connection option), and this might be more convenient when creating and testing your provider.

For publication, however, we recommend specifying a connection string alias in the <binary-install-loc>/database-provider-settings.json file, for example:

"ConnectionStrings":
 [
   {
     "Name":"MyDb",
     "ConnectionString":"DataSource=./Tests/TestData/sakila.sqlite",
     "LicenceCode": "honeycomb-standard-access"
   }
 ], 

You can then use the alias Name with the --connection option. Note you will need to restart Sql.Db.Mine if you change this file while the service is running.
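
For example, a minimal sketch that verifies the MyDb alias resolves, assuming the SQLite connection string above:

@x = use Sql.Db.Mine
--type=SqLite
--connection="MyDb"
----
select 1 as ConnectionTest
enduse;
select * from @x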

To connect via aliases you must also create a policy consisting of the following JSON, assign it to a suitable role, and assign that role to your LUSID user (or any LUSID administrative user wishing to run queries against Sql.Db.Mine):

{
   "description": "Enable the Sql.Db.Mine provider to use connection string aliases",
   "applications": [
       "Honeycomb"
   ],
   "grant": "Allow",
   "selectors": [
       {
           "idSelectorDefinition": {
               "identifier": {
                   "code": "Connection/*",
                   "scope": "Honeycomb"
               },
               "actions": [
                   {
                       "scope": "Honeycomb",
                       "activity": "Execute",
                       "entity": "Feature"
                   }
               ],
               "name": "Luminesce connection string aliases",
               "description": "Enable the Sql.Db.Mine provider to use connection string aliases"
           }
       }
   ],
   "when": {
       "activate": "2022-03-03T00:00:00.0000000+00:00",
       "deactivate": "9999-12-31T23:59:59.9999999+00:00"
   }
}

Getting help

You can access the latest documentation in the help page for the Sql.Db.Mine provider at any time by running this query:

@x = use Sql.Db.Mine
--help
enduse;

Alternatively, run the following query to retrieve the help content as a table:

select distinct CustomSyntax from Sys.Registration where Name = 'Sql.Db.Mine'

Understanding the Sql.Db.Mine provider syntax

Sql.Db.Mine is a direct provider, and therefore follows these syntactic conventions.

The precise syntax is as follows:

@x = use Sql.Db.Mine [with @@scalar_var1 [, @@scalar_var2...]]
<options-section>
----
<sql-for-reading-section>
----
<sql-for-writing-section>
----
<sql-for-configuring-section>
enduse;
-- The following statement is optional but useful while creating the provider to test it returns meaningful data
select * from @x

Note the following:

  • Each <section> must be separated by a line consisting of at least four hyphens (and no other characters).
  • <options-section> 
    An option takes the form --<option>=<value>, for example --provider=Provider.MyProvider. Note no spaces are allowed either side of the = operator. If an option takes a boolean value, then specifying that option (for example --allDomains) sets it to True; omitting the option specifies False.

    The --type and --connection options are mandatory. To list all the options, access the help page and examine the table at the bottom. The following options are particularly relevant:

    --type
        Mandatory. Valid arguments are: MySql, Odbc, Oracle, PostgreSql, SqLite, SqlServer, GoogleBigQuery, AwsAthena.

    --connection
        Mandatory. This can either be:
        • A connection string, for example --connection="Data Source=./Tests/TestData/sakila.sqlite;"
        • The Name of a connection string alias, for example --connection="MyDb"

    --provider=<name>
        Mandatory in order to publish your custom provider with the specified name in the Luminesce catalog, for suitably-permissioned end users to discover and query. Note it may be more performant to omit this option while creating and testing your provider.

    --withWriter
        Mandatory if you include a <sql-for-writing-section> in order to write to your SQL database.

    --commitEvery=<value>
        When writing data, performs a commit every <value> rows. For example, specifying --commitEvery=10000 performs a commit every 10,000 rows. If this option is not specified, one commit is performed at the end of the operation; if an error occurs, all data the operation has written so far is rolled back.

    --parameters <name>,<type>,<value>,true/false[,description]
        Defines scalar variables that can be passed into the custom provider by end users. Each declaration:
        • Must be on a separate line after --parameters, so if specified this option should always be last in the list.
        • Must have a name and a data type (one of Boolean, Int, BigInt, Double, Decimal, Text, Date or DateTime).
        • Must have an initial value appropriate to the data type. To include quote or comma characters in a Text value, encapsulate it in double quote marks (see the example below). Specify true as the fourth argument to make the initial value also the default value; specify false to use the initial value only when setting up the provider, and not when end users query the published version.
        • Can have a description that appears as a tooltip for end users in the Luminesce catalog. To include quote characters in the tooltip, encapsulate them in double quote marks:

        @x = use Sql.Db.Mine
        --type=SqLite
        --connection="MyDb"
        --description="This is a tooltip for the provider as a whole"
        --parameters
        AssetClass,Text,Equity,true
        Isin,Text,GB124892739,true
        BondType,Text,"Vanilla, Complex and Inflation-Linked",true,"This is a parameter tooltip"
        AsAt,DateTime,0001-01-01,false,"This will ""appear"" as a parameter tooltip"
        ----

        See also the #PARAMETERVALUE macro in the section below for inserting parameter values into the provider at the appropriate location.

    --deleteProvider
        Removes a published provider from the Luminesce catalog; see Deleting your custom provider below.

    --limit=<n>
    --offset=<n>
    --groupBy=<pipe-delimited(|)-set-of-expressions-to-group-by>
    --select=<pipe-delimited(|)-set-of-expressions-to-select>
    --filter=<filter-expression>
    --distinct
        Specifying these options optimises performance in conjunction with the appropriate macros (see the section below) while you are creating and testing the custom provider. Consider the following example:

        @x = use Sql.Db.Mine
        --type=SqLite
        --connection="MyDb"
        --distinct
        --filter=Isin is not null
        --limit=42
        ----
        select
            #DISTINCT
            #SELECT { { Isin : Isin } }
            from my_database_table where
            #RESTRICT { { Isin : Isin } }
        #LIMIT
        enduse;
        select * from @x

        Each time you run this query during the development phase, a maximum of 42 different bonds with ISIN identifiers is returned, enabling you to test that the provider returns meaningful data in a reasonable timeframe.

        Note these options have no effect on a published provider, so you can either retain or omit them when you are ready to add the --provider option and go live.

  • <sql-for-reading-section>
    <sql-for-writing-section>
    <sql-for-configuring-section> 
    Each section is nominally optional, but at least one of them is mandatory in order for your custom provider to perform a useful operation. They must appear in the specified order, and an ENDUSE statement must terminate whichever is the last of these sections (see the minimal sketch below).
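
    For instance, a minimal sketch of a query containing all three sections, assuming a SQLite connection string alias MyDb and a hypothetical my_database_table table:

    @x = use Sql.Db.Mine
    --type=SqLite
    --connection="MyDb"
    --withWriter
    ----
    -- <sql-for-reading-section>
    select instrument, units from my_database_table
    ----
    -- <sql-for-writing-section>
    INSERT INTO my_database_table (#INSERT_COLUMN_NAMES)
    VALUES (#INSERT_COLUMN_VALUES)
    ----
    -- <sql-for-configuring-section>
    CREATE TABLE IF NOT EXISTS my_database_table (
      instrument VARCHAR NOT NULL,
      units BIGINT NOT NULL
    );
    enduse;
    select * from @x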

Reading from your SQL database

You must write a SQL query in the <sql-for-reading-section> that is valid for the SQL database you are connecting to.

We recommend including the following macros in your SQL query to optimise performance.

Note: For more information on each macro, and for available macros that are not included below (such as #CHECKACCESS, #USERID and #CLIENTDOMAIN), access the help page.

#SELECT{{<column>}, [{<column>}…]} (use many times)

Optimises the performance of retrieving data from your SQL database.

Each <column> argument consists of a column name and an SQL statement that defines how that column is populated, separated by a colon. For example:

@x = use Sql.Db.Mine
----
select #SELECT {
  { Instrument: i.DisplayName },
}
from my_database_table i
enduse;

…specifies that an end user can write a query returning an Instrument column that is populated with values retrieved from the DisplayName column in my_database_table. If the end user does not explicitly request the Instrument column in a particular query, the custom provider does not retrieve the DisplayName column from your SQL database.
 

You could omit the #SELECT macro and create this view as follows:

@x = use Sql.Db.Mine
----
select DisplayName as Instrument from my_database_table
enduse;

…but this would impair performance. The custom provider will always retrieve the DisplayName column from your SQL database whether the end user requests the Instrument column or not, increasing bandwidth and round-trip time unnecessarily.
 

Note you can append one or more characters to a column name to distinguish that column in the custom provider:

  • Append + to make the column a primary key.
  • Append ^ to make the column a 'main' field (one returned by a query that starts select ^ from...)
  • Append ~ followed by either Boolean, Int, BigInt, Double, Decimal, Text, Date or DateTime to explicitly set the data type.
  • Append |"<string>" to give the column a description in the catalog (any character except " may appear in the description).

For example:

select #SELECT {
  { some-column-name+^~Int|"This is an important column" : sql-for-special-column },
  { next-column-name : sql-for-ordinary-column },
}

#SELECT_AGG{{<column>}, [{<column>}…]} (use once)

Use this macro once instead of #SELECT in order to pass any aggregation operations that an end user performs (such as count, min, max or average) down to your SQL database, further improving performance.

#RESTRICT{{<column>}, [{<column>}…]} (use many times)

Optimises the performance of any WHERE clause filters specified by an end user on the named set of columns, passing the filters down to your SQL database.

#RESTRICT_AGG (use once)

Use this macro once instead of #RESTRICT and in conjunction with #SELECT_AGG to optimise the performance of filters specified by the end user on the same set of columns as #SELECT_AGG (which is why this macro has no <column> arguments).

#PARAMETERVALUE(<NAME>) (use many times)

Inserts the named --parameters value specified by an end user at an appropriate point in the custom provider. For example:

@x = use Sql.Db.Mine
--parameters
AssetClass,Text,Equity
----
select * from my_database_table where Type is #PARAMETERVALUE(AssetClass)
enduse;

Note parameters with a data type of Date or DateTime must be assigned to an intermediate scalar variable; see this example from the similar Sys.Admin.SetupView provider. This is a temporary restriction that may be removed shortly.

#LIMIT (use once)

Inserts a LIMIT clause specified by an end user at an appropriate point in the custom provider, optimising performance. Since this macro has no effect if the end user does not specify a LIMIT clause, it is typically always useful to include this macro.

#OFFSET (use once)

Fills in the OFFSET clause as required by the SQL dialect.

#DISTINCT (use once)

Inserts a SELECT DISTINCT statement specified by an end user at an appropriate point in the custom provider, optimising performance. Since this macro has no effect if the end user does not specify a SELECT DISTINCT statement, it is typically always useful to include this macro.

#GROUP_BY (use once)

Inserts a GROUP BY statement specified by an end user at an appropriate point in the custom provider, optimising performance. Since this macro has no effect if the end user does not specify a GROUP BY statement, it is typically always useful to include this macro.

#ORDER_BY (use once)

Allows more limits to be applied, and where they exist can significantly optimise performance. Requires the presence of both #SELECT_AGG and #RESTRICT_AGG in the query.

For example, to create our example CustomProviders.RetrieveInternalTransactionData provider:

@x = use Sql.Db.Mine
--provider=CustomProviders.RetrieveInternalTransactionData
--type=SqLite
--connection="MyDb"
----
-- Note the syntax in this section depends on the SQL database you are connecting to
select

#DISTINCT
-- Using the #SELECT_AGG macro here (as opposed to #SELECT):
-- 1. Avoids duplicating the work of the #RESTRICT macro
-- 2. Improves performance by passing aggregate queries down to the SQL database.
-- Here we rename database columns for end users, as well as defining how the Cost column is calculated
#SELECT_AGG
{
  { Name: instrument },
  { Figi: figi },
  { SettleDate: txn_date},
  { Currency: currency },
  { Quantity: units },
  { Price: price },
  { Cost: price * units },
}

from
 my_database_table

where
 -- Enables filtering on all the columns specified in the #SELECT_AGG macro.
 #RESTRICT_AGG

#GROUP_BY

#LIMIT

enduse;
select * from @x

Note the following:

  • When run, this query publishes a CustomProviders.RetrieveInternalTransactionData provider in the Luminesce catalog.
  • An end user can filter/return seven columns from the SQL database table (Name, Figi, SettleDate, Currency, Quantity, Price and Cost).
  • Values in the virtual Cost column are calculated on-the-fly by multiplying the database price and units columns.
  • If an end user writes a query that aggregates any of the columns in the #SELECT_AGG macro, the aggregation is passed down to the SQL database, improving performance. An example of such a query might be:
    select SettleDate, count(*) TransactionCount, max(Cost) as MostExpensive
    from CustomProviders.RetrieveInternalTransactionData
    where Currency = 'GBP' group by 1

Writing to your SQL database

In addition to a SQL query in the <sql-for-reading-section>, you can specify the --withWriter option and write a SQL query in the <sql-for-writing-section>. This creates a writer provider, alongside the read provider, that end users can use to write to the SQL database.

For example, to create a CustomProviders.RetrieveInternalTransactionData.Writer provider:

@some_data = select 'AVIVA PLC' as Name, 'BBG000DCK9D2' as Figi, #2024-07-15# as SettleDate,
'GBP' as Currency, 300 as Quantity, 392 as Price;

@x = use Sql.Db.Mine with @some_data
      --provider=CustomProviders.RetrieveInternalTransactionData
      --type=SqLite
      --connection=MyDb
      --withWriter
      ----
-- Create a provider to read from `my_database_table` called `CustomProviders.RetrieveInternalTransactionData`:
select

#DISTINCT
#SELECT_AGG
{
  { Name: instrument },
  { Figi: figi },
  { SettleDate: txn_date},
  { Currency: currency },
  { Quantity: units },
  { Price: price },
  { Cost: price * units }
}
from
 my_database_table
where
 #RESTRICT_AGG
#GROUP_BY
#LIMIT
      ----

-- Use an upsert query to write to `my_database_table`. 
-- This creates a writer provider called `CustomProviders.RetrieveInternalTransactionData.Writer`:

    INSERT INTO my_database_table (#INSERT_COLUMN_NAMES
        {
          { Name : instrument },
          { SettleDate : txn_date },
          { Quantity : units },
          { Price : price }
        })
      VALUES (#INSERT_COLUMN_VALUES)
      ON CONFLICT (Figi)
      DO UPDATE SET
      updated_at = CURRENT_TIMESTAMP,
      #UPDATE_COLUMN_ASSIGNMENTS
      {
      { Name : instrument = excluded.instrument },
      { Figi : figi = excluded.figi },
      { SettleDate : txn_date = excluded.txn_date },
      { Currency : currency = excluded.currency },
      { Quantity : units = excluded.units },
      { Price : price = excluded.price }
      }
      WHERE
      #INSERT_COLUMN_CHANGED_FILTER

    enduse;

 select * from @x

This publishes two providers in the Luminesce catalog: a read provider, CustomProviders.RetrieveInternalTransactionData, and a write provider, CustomProviders.RetrieveInternalTransactionData.Writer.

Note: The SQL required for the <sql-for-writing-section> varies depending on the database type you’re connected to. Appendix B provides an example of the form required for each supported database type.

We recommend including the following macros in your SQL query to optimise performance. For more information on each macro, and for available macros that are not included below, access the help page.

#INSERT_COLUMN_NAMES{{<column>}, [{<column>}…]} (use once)

Passes column names the end user specifies down to the SQL database. Use the {{ <provider-column-name> : <database-column-name> }} syntax to define how a provider column name that doesn’t match a column name in the database should be provided to the SQL database.

For example:

INSERT INTO my_database_table (#INSERT_COLUMN_NAMES
        {
          { Name : instrument },
          { SettleDate : txn_date },
          { Quantity : units },
          { Price : price }
        })

#INSERT_COLUMN_VALUES (use once)

Assigns values to columns and passes them down to the SQL database.

#UPDATE_COLUMN_ASSIGNMENTS (use once)

Specifies how to assign values to columns.

Use the {{ <column> : <value> }} syntax to define how values provided by the end user are provided to the SQL database. Each {{ <column> : <value> }} argument consists of a column name and an SQL statement that defines how that column is populated, separated by a colon.

For example:

INSERT INTO my_database_table (#INSERT_COLUMN_NAMES)
    VALUES (#INSERT_COLUMN_VALUES)
    ON CONFLICT (Figi)
      DO UPDATE SET
      updated_at = CURRENT_TIMESTAMP,
      #UPDATE_COLUMN_ASSIGNMENTS
      {
      { Name : instrument = excluded.instrument },
      { Figi : figi = excluded.figi },
      { SettleDate : txn_date = excluded.txn_date },
      { Currency : currency = excluded.currency },
      { Quantity : units = excluded.units },
      { Price : price = excluded.price }
      }

…assigns column values in the event of a conflict with Figi (if concurrent sessions perform actions that modify rows). Note the excluded table references values originally proposed for insertion.

#INSERT_COLUMN_CHANGED_FILTER (use once)

Applies column assignments from #UPDATE_COLUMN_ASSIGNMENTS only where something has changed.
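
Once published, an end user can write rows through the writer provider. For example, a sketch that assumes the writer accepts a table of data via the same toWrite convention as the FINBOURNE writer shown at the start of this article:

@some_data = select 'AVIVA PLC' as Name, 'BBG000DCK9D2' as Figi, #2024-07-15# as SettleDate,
'GBP' as Currency, 300 as Quantity, 392 as Price;
select * from CustomProviders.RetrieveInternalTransactionData.Writer where toWrite = @some_data;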

Configuring your SQL database

You can use Sql.Db.Mine to create a new database table if one does not already exist in the --connection location. For example:

@some_data = select 'AVIVA PLC' as Name, 'BBG000DCK9D2' as Figi, #2024-07-15# as SettleDate,
'GBP' as Currency, 300 as Quantity, 392 as Price;

@x = use Sql.Db.Mine with @some_data
      --provider=CustomProviders.RetrieveInternalTransactionData
      --type=SqLite
      --connection=MyDb
      --withWriter
      ----
-- <sql-for-reading-section>
    select #DISTINCT #SELECT_AGG {
      { Name: instrument },
      { Figi: figi },
      { SettleDate: txn_date},
      { Currency: currency },
      { Quantity: units },
      { Price: price },
      { Cost: price * units },
    }
    from my_database_table
    where
    #RESTRICT_AGG
    #GROUP_BY
    #LIMIT
      ----
-- <sql-for-writing-section>
    INSERT INTO my_database_table (#INSERT_COLUMN_NAMES {
        { Name : instrument },
        { SettleDate : txn_date },
        { Quantity : units },
        { Price : price }
      })
    VALUES (#INSERT_COLUMN_VALUES)
    ON CONFLICT (Figi)
      DO UPDATE SET
      updated_at = CURRENT_TIMESTAMP,
      #UPDATE_COLUMN_ASSIGNMENTS
      {
        { Name : instrument = excluded.instrument },
        { Figi : figi = excluded.figi },
        { SettleDate: txn_date = excluded.txn_date },
        { Currency : currency = excluded.currency },
        { Quantity : units = excluded.units },
        { Price : price = excluded.price }
      }
      WHERE
      #INSERT_COLUMN_CHANGED_FILTER
      ----
-- <sql-for-configuring-section>
-- Optionally, create the `my_database_table` table itself if one does not already exist:

    CREATE TABLE IF NOT EXISTS my_database_table (
      instrument VARCHAR NOT NULL,
      figi VARCHAR UNIQUE,
      txn_date DATETIME NOT NULL,
      currency VARCHAR NOT NULL,
      units BIGINT NOT NULL,
      price NUMERIC NOT NULL,
      updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
      created_at DATETIME DEFAULT CURRENT_TIMESTAMP
    );

    enduse;

    select * from @x

…creates a table my_database_table with the columns instrument, figi, txn_date, currency, units, price, updated_at and created_at.

Note you must specify a <sql-for-reading-section> and <sql-for-writing-section> when configuring your database using Sql.Db.Mine. This means the query creates a table and also publishes:

  • A read provider, for example CustomProviders.RetrieveInternalTransactionData, that users can query to read from my_database_table.
  • A write provider, for example CustomProviders.RetrieveInternalTransactionData.Writer, that users can query to write to my_database_table.
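
Once the query has run, a quick way to check that the table exists and the read provider works is to select a few rows back (a sketch; the table is empty until something is written to it):

select * from CustomProviders.RetrieveInternalTransactionData limit 10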

Publishing your custom provider

Each time you run a query with the --provider option, a custom provider with the specified name is automatically created and made available in the Luminesce catalog (note you may need to click the Refresh button in the catalog to see it).

Note: It is more performant to omit the --provider option while you are creating and testing your provider.

To grant end users the necessary permissions to query a custom provider, create a suitable access control policy and assign it to each LUSID end user.

Once published, an end user can write Luminesce SQL queries using any of the data fields published in the catalog, so for example:

select Name, Figi, Cost
from CustomProviders.RetrieveInternalTransactionData
where Currency = 'USD' and SettleDate > #2022-01-01#;

Routing your providers globally

After setting up and testing your providers as a single user, you can specify --routeAs=Global in your start-service command to make the database configuration global.

To make the switch from user to global, you may need to first delete existing certificates from your binary installation location. For a default Windows installation, this can be found in %USERPROFILE%\.dotnet\tools\.store\finbourne.luminesce.dbproviders.<*>\<version>\finbourne.luminesce.dbproviders.<*>\<version>\tools\net6.0\any.

Once any existing certificates have been deleted, the start-service command will automatically download the required domain-wide certificates when it next runs. For example, you could run:

C:\Users\JohnDoe\.dotnet\tools> luminesce-database-providers.exe --routeAs=Global --quiet --authClientDomain=<your-domain> --config "DataProvider:RoutingTypeGlobalMachineWhitelist=<your-device-name>"

Important: Setting the DataProvider:RoutingTypeGlobalMachineWhitelist config option to your device name explicitly grants permission for users with suitable access control permissions to access and pull data from files on your machine. Ensure you have set up the correct access control permissions before running the command.

Managing your live custom provider

Once published, you can manage a custom provider by running queries, for example:

  • To retrieve the original query used to create the view:
    select Sql, At
    from sys.logs.hcquery
    where Sql like '%Sql.Db%' and Sql like '%RetrieveInternalTransactionData%' and Sql not like '%HcQuery%' and ShowAll = true
    order by At desc limit 1
  • To examine the current content of the view:
    select Content from Sys.File where Name like '%RetrieveInternalTransactionData%' and Extension = '.sql'

Deleting your custom provider

To remove a custom provider from the Luminesce catalog (which means it will no longer be available for end users to query), run your query with the --deleteProvider option. For example:

@x = use Sql.Db.Mine
--provider=CustomProviders.RetrieveInternalTransactionData
--type=SqLite
--connection="Data Source=./Tests/TestData/sakila.sqlite;"
--deleteProvider
----
select 1 as deleting
enduse;
select * from @x

Note if using Oracle, you must supply a FROM clause for your select 1 as deleting statement. For example:

@x = use Sql.Db.Mine
--provider=CustomProviders.RetrieveInternalTransactionData
--type=Oracle
--connection="Data Source=<data-source>;"
--deleteProvider
----
select 1 as deleting from dual
enduse;
select * from @x
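
After running the deletion query, you can confirm the provider no longer appears in the catalog with a check such as the following (a sketch; it should return no rows once the deletion has taken effect):

select distinct Name from Sys.Registration where Name like '%RetrieveInternalTransactionData%'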

Appendix A: Allowing non-administrative users to download binaries

Once a license is granted, users with the lusid-administrator role have permission to download any SqlDb_Providers_* binary out-of-the-box.

To enable less privileged users to download, create a feature policy consisting of the following JSON, assign it to a role, and the role to their LUSID user. Make sure the code in the policy matches the binary name you download; for example, the code for PostgreSQL is SqlDb_Providers_Postgresql:

{
    "description": "User can download PostgreSQL binary",
    "applications": [
        "Honeycomb"
    ],
    "grant": "Allow",
    "selectors": [
        {
            "idSelectorDefinition": {
                "identifier": {
                    "code": "SqlDb_Providers_Postgresql",
                    "scope": "Honeycomb"
                },
                "actions": [
                    {
                        "scope": "Honeycomb",
                        "activity": "DownloadBinary",
                        "entity": "Feature"
                    }
                ],
                "name": "PostgreSQL binary",
                "description": "User can download PostgreSQL binary"
            }
        }
    ],
    "when": {
        "activate": "2023-01-01T00:00:00.0000000+00:00",
        "deactivate": "9999-12-31T23:59:59.9999999+00:00"
    }
}

Appendix B: Syntax for writing to different SQL database types

The form of query required to write to your SQL database can vary greatly by database type. The following examples show the required SQL query form for each supported database type:

PostgreSQL:

INSERT INTO my_table (#INSERT_COLUMN_NAMES)
           VALUES (#INSERT_COLUMN_VALUES)
           ON CONFLICT (name)
               DO UPDATE SET
               updated_at = CURRENT_TIMESTAMP,
               #UPDATE_COLUMN_ASSIGNMENTS
               {
               { name : name = excluded.name },
               { email : email = case
                   when (my_table.email like '%' || excluded.email || '%')
                   then my_table.email
                   else excluded.email || ';' || my_table.email
                   end },
               }
               WHERE
               #INSERT_COLUMN_CHANGED_FILTER

SQLite:

INSERT INTO my_table (#INSERT_COLUMN_NAMES)
           VALUES (#INSERT_COLUMN_VALUES)
           ON CONFLICT (name)
    DO UPDATE SET
    #UPDATE_COLUMN_ASSIGNMENTS
       {
           { name : name = excluded.name },
           { email : email = excluded.email },
       }
    WHERE #INSERT_COLUMN_CHANGED_FILTER

DuckDb:

INSERT INTO my_table (#INSERT_COLUMN_NAMES)
           VALUES (#INSERT_COLUMN_VALUES)
           ON CONFLICT (name)
    DO UPDATE SET
    #UPDATE_COLUMN_ASSIGNMENTS
       {
           { name : -- this line must be here to generate the correct SQL inserts as the pkey cannot be assigned in DuckDb },
           { email : email = excluded.email },
       }
    WHERE #INSERT_COLUMN_CHANGED_FILTER

Oracle:

MERGE INTO my_table T
USING (SELECT #INSERT_COLUMN_VALUES FROM DUAL) X
    ON (T.name = X.name)
    WHEN MATCHED THEN
    UPDATE SET
    #UPDATE_COLUMN_ASSIGNMENTS
    {
    { name : -- a comment of any form as no assignment is allowed but this must be present or name is unknown },
    { email : T.email = X.email },
    }
    ,updated_at = CURRENT_TIMESTAMP
    WHEN NOT MATCHED THEN
    INSERT (#INSERT_COLUMN_NAMES, updated_at, created_at)
    VALUES (X.name, X.email, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP)

-----------------

-- This must be done manually once, then commented out as Oracle doesn't support CREATE TABLE IF NOT EXISTS
-- CREATE TABLE test_table_4 (
--     name varchar2(50) NOT NULL,
--     email varchar2(150) NOT NULL,
--     updated_at TIMESTAMP NOT NULL,
--     created_at TIMESTAMP NOT NULL
-- )

enduse;

select * from @x

MySql: Coming soon
Odbc: Coming soon
SqlServer: Coming soon
GoogleBigQuery: Coming soon
AwsAthena: Coming soon
Db2: Coming soon
Snowflake: Coming soon