Configuration parameter AUTO_REPREPARE and Session environment option IFX_AUTO_REPREPARE have been updated
For both of these, optimistic mode offers faster performance by not checking statements that ran successfully less than a second ago.
In the unlikely event that tables were modified in the interim, some -710 errors might occur.
Configuration parameter AUTO_REPREPARE - in addition to the existing values, three new values are now supported:
Session environment option IFX_AUTO_REPREPARE - in addition to the existing values, three new values are now supported:
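As a sketch, the session option is set with a SET ENVIRONMENT statement; the value shown is one of the pre-existing settings, and the new optimistic-mode values (not listed here) are substituted the same way:

```sql
-- Session-level sketch: enable automatic statement re-preparation for
-- the current session. '1' is a pre-existing value; substitute one of
-- the new optimistic-mode values in the same way. The server-wide
-- equivalent is the AUTO_REPREPARE parameter in the onconfig file.
SET ENVIRONMENT IFX_AUTO_REPREPARE '1';
```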
You can upgrade a high-availability cluster to the next fix pack or interim update (PID) with minimal interruption to client applications.
During the rolling upgrade process, the cluster remains online even though the servers in the cluster are running on different levels of the software.
IBM Informix 12.10.xC5 software supports Java™ Platform Standard Edition (Java SE), Version 7.
Informix installation applications install IBM Runtime Environment, Java Technology Edition, Version 7 on most platforms by default.
That version is used to run Java user-defined routines that are created in the server.
IBM Runtime Environment, Java Technology Edition is supported for general use of the database server.
It is installed on most operating system platforms by default in the following directory: $INFORMIXDIR/extend/krakatoa/jre/.
MongoDB API and REST API access supports IBM Runtime Environment, Java Technology Edition, Version 7.
Informix does not support OpenJDK.
The following products and components require a software development kit for Java, but one is not installed:
The software development kit that is used must be compatible with the supported Java runtime environment.
The default installation and deployment log file is now /tmp/iad_act.log (UNIX, Linux, Mac OS X) or \tmp\iad_act.log (Windows).
You can specify a different name and location for the installation and deployment log file with the -DLOG_FILE= option in the installation command.
The -DDEBUG option is deprecated.
By default, tracing is disabled. You can set the tracing level (1 - 9) with the -DDEBUG_LEVEL= option in the installation command.
If tracing is enabled, the default name and location of the debugging file is /tmp/iad_dbg.log (UNIX, Linux, Mac OS X) or \tmp\iad_dbg.log (Windows).
You can set the name and location of the debug file with the -DDEBUG_FILE= option.
For the ids_install, installclientsdk, or installconnect command, the new -DOVERWRITE_PRODUCT=TRUE option overwrites an existing installation.
Otherwise, the installation application exits if it finds an existing product installation.
For the ids_install, installclientsdk, or installconnect command, the "-i swing" option is deprecated; use "-i gui" instead.
Use -DDEBUG_LEVEL= rather than -DDEBUG.
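A hypothetical invocation that combines the options above (all paths are illustrative):

```shell
# Run the installer in GUI mode, overwrite an existing installation,
# send the installation log to a custom path, and enable level-5
# tracing to a custom debug file.
./ids_install -i gui \
    -DOVERWRITE_PRODUCT=TRUE \
    -DLOG_FILE=/opt/logs/iad_act.log \
    -DDEBUG_LEVEL=5 \
    -DDEBUG_FILE=/opt/logs/iad_dbg.log
```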
When running the SQL admin commands admin() or task() with the "tenant create" or "tenant update" arguments, there are additional options:
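As a sketch, tenant properties are passed to the command as a JSON document; the property names shown below are illustrative, not an exhaustive list of the new options:

```sql
-- Create a tenant database named companyA with the task() SQL
-- administration function; dbspace and limit values are illustrative.
EXECUTE FUNCTION task('tenant create', 'companyA',
    '{"dbspace":"companyA_dbs",
      "session_limit_memory":"100 MB"}');
```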
An Informix administrator can assign a tenant database to a tenant owner in the IBM OpenAdmin Tool (OAT) for Informix.
A tenant owner can access only the assigned database.
You can limit resources for all sessions that are owned by non-administrative users to prevent performance issues, for example, in embedded environments.
The maximum value of the TAPEDEV and LTAPEDEV configuration parameters is now 9223372036854775807 KB, which is equivalent to 9 ZB.
The Informix JDBC Driver now supports the Estonian and Lithuanian locales, for example, et_ee.
In a subquery, an aggregate expression with a column operand that was declared in a parent query block is called a correlated aggregate.
The column operand is called a correlated column reference. When a subquery contains a correlated aggregate with a correlated column reference, the database server now evaluates that aggregate in the parent query block where the correlated column reference was declared.
If the aggregate contains multiple correlated column references, the aggregate is processed in the parent query block (where the correlated column reference originated) that is the nearest parent to the subquery.
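A minimal sketch of a correlated aggregate (table and column names are illustrative):

```sql
-- MAX(d.budget) is a correlated aggregate: d.budget is declared in the
-- parent query block, so the aggregate is now evaluated in that parent
-- block rather than inside the subquery.
SELECT d.dept_no
FROM department d
WHERE d.budget IN (
    SELECT MAX(d.budget)          -- correlated column reference
    FROM employee e
    WHERE e.dept_no = d.dept_no
);
```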
BSON columns can be created with the SQL CREATE TABLE statement, and indexes can be created on these columns.
BSON data can be manipulated in a collection that was created by a MongoDB API command.
BSON data can be viewed by casting it to JSON format or by using new BSON functions to convert BSON field values to standard SQL types, such as INTEGER and LVARCHAR.
The new BSON_GET and BSON_UPDATE functions can be used to operate on field-value pairs.
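A sketch of working with a BSON column (table and field names are illustrative, and the exact function signatures should be checked against the BSON function reference):

```sql
-- Create a table with a BSON column.
CREATE TABLE products (id SERIAL, details BSON);

-- View the BSON data by casting it to JSON format.
SELECT details::JSON FROM products;

-- Extract one field, then update one field-value pair.
SELECT BSON_GET(details, 'quantity') FROM products;
UPDATE products
   SET details = BSON_UPDATE(details, '{"$set": {"quantity": 10}}');
```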
High availability can be provided to MongoDB and REST clients by running a wire listener on each server in an Informix high-availability cluster.
High availability can also be provided between the wire listener and the Informix database server: connect the wire listener to the database server through the Connection Manager, or specify an sqlhosts file in the url parameter in the wire listener properties file.
There are new or updated parameters in the wire listener properties file:
The wire listener now supports these types of queries:
You can control user authorization to Informix databases through the wire listener by locking and unlocking user accounts or individual databases with the new Informix JSON lockAccounts and unlockAccounts commands.
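A sketch of the locking commands as issued from a MongoDB client through the wire listener; the user and database names are illustrative, and the exact command document syntax is an assumption:

```
> db.runCommand({ lockAccounts: 1, user: "appuser" })
> db.runCommand({ unlockAccounts: 1, db: "salesdb" })
```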
Data that is entirely in JSON documents can be loaded into a time series.
Previously, primary key values and time stamps had to be provided in plain text format.
Use the new TSL_PutJson function to load pure JSON documents.
JSON documents can be loaded from a file or a pipe.
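A sketch of a JSON load through the loader API; the loader handle (table|column), file path, and `file:` prefix are illustrative, and the session-handling calls are simplified:

```sql
-- Open a loader session for the tsdata table's "readings" TimeSeries
-- column, load JSON documents from a file, then flush data to disk.
EXECUTE FUNCTION TSL_Init('tsdata|readings');
EXECUTE FUNCTION TSL_PutJson('tsdata|readings', 'file:/data/readings.json');
EXECUTE FUNCTION TSL_Flush('tsdata|readings');
```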
Files can now be quickly loaded by specifying a file path as the second argument to the TSL_Put function.
Previously, the TSL_Put function accepted data as only LVARCHAR or CLOB data types, which require intermediate steps to process the data.
The time series data that is loaded with the TSL_Put function can now contain JSON or BSON documents as values for columns other than the primary key and time stamp columns.
Unlike with the TSL_PutSQL function, when loading JSON columns with the TSL_Put function, an external table does not need to be created from which to load the data.
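A sketch of the file-path form (the loader handle, file path, and `file:` prefix are illustrative):

```sql
-- Load records directly from a file instead of passing LVARCHAR or
-- CLOB data, avoiding the intermediate processing steps.
EXECUTE FUNCTION TSL_Put('tsdata|readings', 'file:/data/readings.unl');
```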
If a loader program is used to load time series data, loader messages can be retrieved from a queue instead of logging the messages in a message log file.
Retrieving messages from a queue results in less locking contention than logging messages in a file.
Queued messages can be retrieved as formatted message text in English by running the new TSL_GetFmtMessage function.
Alternatively, the TSL_GetLogMessage function can be used to return message numbers, and the new TSL_MessageSet function can be used to return the corresponding message text.
This method is useful to provide your own message text or to retrieve the message text on the client.
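A sketch of both retrieval styles (the loader handle and the message number are illustrative):

```sql
-- Retrieve queued loader messages as formatted English message text.
EXECUTE FUNCTION TSL_GetFmtMessage('tsdata|readings');

-- Or retrieve message numbers and look up the text separately, for
-- example, on the client or to substitute your own message text.
EXECUTE FUNCTION TSL_GetLogMessage('tsdata|readings');
EXECUTE FUNCTION TSL_MessageSet(1001);   -- message number is illustrative
```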
A new time series instance can be created whilst loading data with a time series loader program. Previously, primary keys had to be inserted and a time series instance created before loading data with a loader program.
A loader program can specify the definition of a time series instance by running the new TSL_SetNewTS function.
The time series definition can be applied to the current loader session or to all loader sessions.
When loading data with the TSL_Put function for a new primary key value, a new row is added to the table and a new time series instance is created based on the definition.
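A sketch of registering a definition (the time series input string and the final scope argument are illustrative assumptions):

```sql
-- Register a time series definition; TSL_Put then creates a new time
-- series instance for any new primary key value that it encounters.
EXECUTE FUNCTION TSL_SetNewTS('tsdata|readings',
    'calendar(daycal),container(c1),threshold(0),regular', 1);
```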
For a virtual table, a new time series instance can be created whilst quickly inserting elements into containers.
In the TSCreateVirtualTab procedure, set the NewTimeSeries parameter and the elem_insert flag of the TSVTMode parameter.
The origin of any new time series instance can be automatically set to the day that the time series is created by including formatting directives for the year, month, and day.
Formatting directives for the origin in the time series input string can be included in an INSERT statement or in the NewTimeSeries parameter in the TSL_SetNewTS function and the TSCreateVirtualTab procedure.
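For example, an input string with origin formatting directives might look like the following; the %Y, %m, and %d directives resolve to the date the time series is created, and the rest of the string is illustrative:

```sql
INSERT INTO tsdata VALUES (101,
    'origin(%Y-%m-%d 00:00:00.00000),calendar(daycal),
     container(c1),threshold(0),regular,[]');
```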
The amount of storage space that is used by a time series can be found by running the new TSInfo function.
The level of detail can be customized e.g. display details about element pages, such as the number of pages, the number of bytes, the amount of free space, and the number of null pages.
Information can also be returned about the other properties of a time series, such as the origin, the type of values, and containers.
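A sketch of a storage report (the table, column, and detail-level argument are illustrative):

```sql
-- Report storage usage and properties for one time series instance.
SELECT TSInfo(readings, 'all')
FROM tsdata
WHERE id = 101;
```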
When a time series loader program is run, a loader session is opened for each table and TimeSeries column combination into which data is loaded.
A list of handles for active loader sessions can be retrieved by running the TSL_ActiveHandles function; each handle consists of the table name and the TimeSeries column name.
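For example (a sketch; it is assumed here that the function takes no arguments):

```sql
-- List the table|column handles of all active loader sessions.
EXECUTE FUNCTION TSL_ActiveHandles();
```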
Time series data can be searched for matches to a specific pattern of values by using the TSPatternMatch function.
You can specify the margin of error and whether to search through consecutive sequences of values or through every possible subsequence of values.
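A sketch of a pattern search; the argument order, the LIST pattern literal, and the error value are illustrative assumptions:

```sql
-- Search the "value" column of one time series for subsequences that
-- match the pattern (1.0, 5.0, 2.0) within a 5% margin of error.
SELECT TSPatternMatch(readings, 'value',
       '2014-01-01 00:00:00'::DATETIME YEAR TO SECOND,
       '2014-12-31 00:00:00'::DATETIME YEAR TO SECOND,
       LIST{1.0, 5.0, 2.0}, 0.05)
FROM tsdata
WHERE id = 101;
```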
Data between two time points in a time series can be extracted with the ProjectedClip function, which clips the time series data and returns a new time series that contains only the specified columns of the original time series.
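A sketch of a clip; the column list, flag value, and time points are illustrative assumptions:

```sql
-- Return a new time series that contains only the "value" column,
-- clipped between two time points.
SELECT ProjectedClip(readings,
       '2014-01-01 00:00:00.00000'::DATETIME YEAR TO FRACTION(5),
       '2014-01-31 00:00:00.00000'::DATETIME YEAR TO FRACTION(5),
       0, 'value')
FROM tsdata
WHERE id = 101;
```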
The data that is loaded into a time series might be configured to store a null value when a value does not differ from the previous value.
Null values can be replaced with the previous non-null values:
Moving objects, such as vehicles, can be tracked by capturing location information for each object at regular time intervals.
The new spatiotemporal search extension can be used to index the data and then query on either time or on location to determine the relationship of one to the other.
Queries can be run to find when an object was at a specific location or where an object was at a specified time.
The trajectory of a moving object over a range of time can also be found.
The spatiotemporal search extension depends on the TimeSeries and spatial extensions.
The spatiotemporal data is stored in a TimeSeries data type with columns for longitude and latitude.
The data can be indexed and queried with the new spatiotemporal search functions.
Spatiotemporal data can also be queried with time series and spatial routines.
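As a sketch, where the STS_* function names and signatures are assumptions that should be checked against the spatiotemporal search reference, and the table and column names are illustrative:

```sql
-- Where was vehicle 1 at a specified time?
SELECT STS_GetPosition(track,
       '2014-02-01 10:30:00'::DATETIME YEAR TO SECOND)
FROM vehicle_tracks
WHERE vehicle_id = 1;

-- The trajectory of vehicle 1 over a range of time:
SELECT STS_GetTrajectory(track,
       '2014-02-01 00:00:00'::DATETIME YEAR TO SECOND,
       '2014-02-02 00:00:00'::DATETIME YEAR TO SECOND)
FROM vehicle_tracks
WHERE vehicle_id = 1;
```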