30,000 is not really very many rows; your client should not be having problems with such a table size unless the rows themselves are huge, or the machine itself is small and weak. However, the following should help:
Jerry M. Denman (jerry@sherwood.com) adds: Be aware that you must have enough space to create a duplicate copy of the table when clustering an index. OnLine copies the table to a temporary table while ordering the rows, then drops the original and renames the temporary table to the original's name. Also, if the table sees a large number of insert/delete statements, the benefits of clustering are soon gone.
Some operating systems also provide mirroring, and so does some hardware, so which is best?
Joe Lumbley succinctly states:
Hardware mirroring is usually best, as it's faster. Next is HP-UX. Last comes OnLine mirroring. OnLine mirroring contains a little bit of logic regarding how to handle chunks that are down, but with any luck the HPUX or hardware mirroring will never let OnLine see that situation anyway.
Jonathan Leffler explains further:
I simply haven't yet heard a convincing explanation of why our software can do it better than the O/S can.
It does, of course, depend on the mirroring support from the O/S. Especially on a multi-CPU machine where the native mirroring system handles all I/O on a single CPU, DSA could have an advantage if it has multiple threads handling the writes in parallel. But I'm not convinced that O/S mirroring is that bad. It also depends on the intelligence (or otherwise) of the disk controllers. Etc.
Unless the O/S has screwed up badly, I don't think that the Informix mirroring provides much (if any) advantage.
I don't have any concrete evidence either way, and it is very difficult to determine experimentally. I know that there were plans at one time to spend a day or two assessing the effect of LVMs (logical volume managers) on the performance of OnLine. I also know that it didn't happen -- I hope in part because I pointed out to the person who was asked to do the test that controlling the parameters of the test was going to be difficult. It would have needed considerably more than a day or two simply to work out what to test and how, independently of the time taken to create and load suitable data sets (mainly large ones) under multiple different configurations: differing amounts of RAID-ness, different numbers of controllers, different places where the mirroring occurs, different numbers of CPUs, different numbers of AIO threads, striping, etc.
So, yes, I think maybe you are being led astray by listening to Informix marketing talk.
One of the claimed advantages for DSA disk handling is that it can selectively read from either the primary or the secondary of a mirrored pair -- so can the O/S mirrored systems, and here is evidence that at least one does precisely that:
From: johnbr@atl.hp.com (John Bria)
HP's Logical Volume Manager will allow you to "stripe" non-array disks by creating extents on a rotating basis across multiple drives. This may/may not be advantageous as you develop a fragmentation strategy.
If you use HP-UX mirroring, reads will be routed to the mirror copy if the primary is busy. Under heavy disk loads, this is very advantageous.
The head cuts things off when tbtape gets into the infinite "insert a tape and press any key to continue" thing. I have found that normal output is well less than 100 lines. This trick also seems to kill off the loose tbtape process too - but I'm not sure why. Of course, YMMV.
Martin Andrews (andrewm@ccfadm.eeg.ccf.org)
Note that recent releases of Informix engines have renamed the "tb*" utilities to "on*"
SELECT TRUNC(partnum/16777216) dbspace, COUNT(*) tables,
       SUM(nrows) tot_rows, SUM(nrows*rowsize) bytes
  FROM systables
 WHERE tabtype = 'T'
 GROUP BY 1
 ORDER BY 1;

If you add the 'dbspaces' table to your database and load it with dbspace names taken from tbstat -D output, as I have done, then you can use:
SELECT dbs_name[1,12] dbspace, COUNT(*) tables,
       SUM(nrows) tot_rows, SUM(nrows*rowsize) bytes
  FROM systables, dbspaces
 WHERE tabtype = 'T'
   AND dbs_no = TRUNC(partnum/16777216)
 GROUP BY 1
 ORDER BY 1;

Sample output:
dbspace      tables  tot_rows   bytes
mcs_aaaaa        28        51    3715
mcs_catalog      22      2695  114810
mcs_eeeee        25       224   45446
mcs_fffff        32      1412  201445
mcs_mmmmm        35       165  262599
mcs_wwwww        28       449   79385

("bytes" is data bytes, and does not include indexes and other overhead.)

I separated the mcs system catalog files from the data tables by doing:

create database mcs in mcs_catalog;
I created the data tables in the other five dbspaces by doing:
create table whatever ( ... ) in mcs_xxxxx;

Other than the trick to get the dbspace number (thanks to Joe Glidden and others), this is all pretty straightforward stuff. However, I hope my posting it may save someone some time.
alan@po.den.mmc.com (Alan Popiel)
We use the informix account ONLY for dba tasks, and not for all of those. You should have an informix account and informix group, both of these distinct from all other accounts. No one but user informix in group informix. If you're using SE, then informix doesn't even have to have a password, as you can do all you need to from the root account. In OnLine, informix has to do some things from the command line.
We have an "application" account which owns all the non-system tables. It sets permissions on those tables, and owns all the source, data, and executable directories and code. This keeps the functions of the data administrator and the engine administrator separate. We have a distinct dba function (and a person to do it). Even if you don't yet, you can benefit from planning to be big if you EVER MIGHT get big.
There are grave security issues at stake when you start deviating from these guidelines. Having users in the informix group gives them the power to do things inside the INFORMIXDIR that you don't want. There are no pros worth the risks of having all users members of group informix.
We wrote a set of scripts which handle the tasks of setting, showing, revoking permissions for a list of tables for a given user. They are just loops which echo "revoke all on table tabname for user username" (or whatever) to isql. You can also manually diddle the systabauth table (as informix) to get user permissions set, but that's pretty manly.
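The loop those scripts use can be sketched in shell. This is a hypothetical reconstruction, not the actual scripts: the function name and table names are placeholders, and in real use the output would be piped into isql or dbaccess rather than printed.

```shell
# Hypothetical sketch of such a permissions script: emit "revoke"
# statements for a list of tables, to be piped into isql/dbaccess.
revoke_all() {
    user="$1"; shift
    for tab in "$@"; do
        printf 'revoke all on %s from %s;\n' "$tab" "$user"
    done
}

# Example: generate the statements for user "alice" on two tables
# (placeholder names).  Pipe the output to isql in real use.
revoke_all alice customers orders
```

A matching "grant" script is the same loop with a different printf format string.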
Clem Akins (cwakins@leia.alloys.rmc.com)
Usually, in Unix, the command used to measure CPU time for a process is:
$ /bin/time test.4ge
real 9.0
user 1.6
sys  1.4

In this case, 1.6 + 1.4 is the total CPU time in seconds consumed by the program test.4ge. The 'real' time is not important.
However, in an Informix environment, the 'time' command is hiding something very important: the CPU time of the sqlturbo process. We know that in some situations (e.g. when we use a lot of stored procedures) the sqlturbo CPU time may be greater than the application CPU time.
In order to get a reliable CPU time we have to add the sqlturbo time to the application's time. Moreover, if the application uses a RUN statement, we also have to add the CPU time of the commands triggered by it.
Some operating systems (SCO Unix, Unix SVR4) have a command called 'timex'. This command can obtain the CPU time of a process and its children. Before using 'timex' we have to turn accounting on.
To turn on the accounting, login as root and run:
OS          COMMAND
--------    -------------------------------------
SCO Unix    $ /usr/lib/acct/accton /usr/adm/pacct
Unix SVR4   $ /usr/lib/acct/accton /var/adm/pacct
Other OS    $ man acct

When accounting is on, you can execute the 'timex' command. Use the -p option (in order to obtain the child processes' CPU time).
$ timex -p fglgo test.4gi

                                                    ********
COMMAND                      START    END      REAL * CPU  * CHARS  BLOCKS
NAME      USER    TTYNAME    TIME     TIME    (SECS)*(SECS)* TRNSFD R/W
fglgo     ignacio ttyp0      17:35:05 17:35:05 0.59 * 3.02 * 26256  9
#sqlturbo ignacio ttyp0      17:35:05 17:35:05 0.56 * 4.26 * 32248  12
                                                    ********

'timex' output has two rows: one for the application and one for the sqlturbo. If you use a RUN statement in the application, then the 'timex' output will have a third row for the command triggered by the RUN statement.
In order to obtain the total CPU time used, you must add the times in the CPU column.
Total_CPU_time = fglgo_time + sqlturbo_time = 3.02 secs + 4.26 secs ==>
Total_CPU_time = 7.28 secs
I use 'timex' when I want to compare the execution time of two versions of the same program. If you want to optimize a 4GL program (maybe using an insert cursor, or adding some stored procedures), 'timex' will tell you which version is better. Moreover, when I want to minimize the traffic on the pipe, I use the CHARS TRNSFD column to see how much traffic passes between the application and the sqlturbo.
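The addition of the CPU column can be automated with a small awk one-liner over a saved report. This is only a sketch: the stripped-down layout below (the '*' markers removed, CPU seconds in field 7) is an assumption, and the exact timex column positions vary by platform.

```shell
# Sketch: total the CPU column of a timex-style report with awk.
# The two data lines are the rows from the example above, with the
# '*' markers removed; CPU seconds assumed to be in field 7.
cat <<'EOF' > /tmp/timex.out
fglgo     ignacio ttyp0 17:35:05 17:35:05 0.59 3.02 26256 9
#sqlturbo ignacio ttyp0 17:35:05 17:35:05 0.56 4.26 32248 12
EOF

# Sum field 7 (CPU seconds) across all rows.
awk '{ total += $7 } END { print total }' /tmp/timex.out
```

With the sample rows this prints 7.28, the same total as the hand calculation above.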
When you are using ISQL or DBACCESS to design/optimize your SELECTs, make sure that when you run the select, you choose OUTPUT/APPEND FILE /dev/null. Running ISQL/DBACCESS straight to the screen produces misleading times because ISQL/DBACCESS only selects a screenful at a time.
I use OUTPUT/APPEND /dev/null since this produces the minimal amount of overhead (no writing to disk, since it all disappears down the plughole).
QUESTION:
After a full dbexport/dbimport Online's performance has degraded *significantly* - why? (Version 4.10.UE1)
ANSWER:
As an OnLine database grows, the number of extents increases (fragmentation) and consequently the amount of time required by OnLine to access data increases. In later versions (6.0+, I think) this can be turned to some advantage (parallel queries), but in your version it just makes the system run sloooooowwww! The problem is exaggerated when the system catalog tables (systables, syscolumns, ...) are affected.
The solution to fragmentation is to dbexport/dbimport the database, remembering to control the <dbname>.sql file that dbexport produces so that your newly created/dbimported database has adequate next sizes defined for, in particular, the system catalog tables.
You can define where this control file goes by calling dbexport/dbimport with the -f <filename> option. If you're exporting to disk, the file appears in your <dbname>.exp directory. Edit this file so that before any create table commands you...
ALTER TABLE systables MODIFY NEXT SIZE <nnn> ;
ALTER TABLE syscolumns MODIFY NEXT SIZE <nnn> ;
ALTER TABLE systabauth ...

Ascertain the required value for <nnn> by using the tbcheck -pT command for any of these sys... tables that appear in the tbcheck -ce report.
tbcheck -ce                                ( reports on tables having more than 8 extents )
tbcheck -pT <dbname>:<owner>.<tabname>     ( to get tablespace usage )

Dbexport/dbimport on your version have a drawback, however: the full schema is not exported. What's missing is locking information and extents information. This is easily overcome; in fact I have a script somewhere that automatically inserts this info into a <dbname>.sql file. I need to make it pretty and test it and then I will post it to the group. [see APPENDIX K Calculating extent sizes - Ed]
Running update statistics is always the first recommendation for improving performance. You could also check whether any previous row-level locking is intact, run tbcheck -ce, and RTFM (the manuals for 4.x may not be brilliant but, in general, the later ones are). At this point I should take the chance to recommend the excellent and concise...
Informix OnLine Performance Tuning
by Elizabeth Suto
Published by Prentice Hall
ISBN 0-13-124322-5
Disclaimer: Of course, with fragmentation and CREATE TABLE...IN DBSPACE... statements this may not be completely true, but anyway:
Chuck Ludwigsen (cludwigs@hotmail.com) writes:

Try this query against the sysmaster database:
select b.dbsname, a.name
  from sysdbspaces a, systabnames b
 where a.dbsnum = partdbsnum(b.partnum)
   and b.tabname = "systables"
   and b.dbsname = "yourdbname"
Just substitute for "yourdbname".
spal@scotch.den.csci.csc.com (Sujit Pal) writes:

On 7.x instances, try:
SELECT DBINFO("DBSPACE", HEX(partnum)) FROM systables WHERE tabname = "systables"
On 5.x instances, try:
SELECT HEX(partnum) FROM systables WHERE tabname = "systables"
In the first 2 characters of the output (after the 0x) you will see the dbspace number. Do a tbstat -d to get the dbspace name.
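As a sanity check, the arithmetic behind these queries can be done in the shell: the dbspace number is just the most significant byte of the 4-byte partnum, i.e. partnum divided by 16777216 (2^24). The partnum value below is only an example.

```shell
# Example: extract the dbspace number from a partnum value.
# 16777216 is 2^24, so dividing keeps only the most significant byte.
partnum=23068731             # example value, 0x160003B in hex

echo $(( partnum / 16777216 ))    # dbspace number
echo $(( partnum >> 24 ))         # same thing, written as a bit shift
```

Both lines print 1 here, i.e. this example table lives in dbspace 1.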
We did experience a lot of performance problems in the past. Almost all of them turned out to be network problems -- either network hardware or network configuration. One of our performance problems was resolved by installing a second CPU in a Sparc 20 server. One way to CONFIRM a network problem is to enter the following two commands on the server side:

arp -a
netstat -r

If you don't get an instantaneous response, you MUST have a network problem. It sounds simple, but it took us a long time to learn the above RULE. When arp -a pauses for a while and then displays an IP address, you can be sure that is the troublesome node (I don't mean that particular node itself has problems). When netstat -r pauses, you may have a router problem (we had a 224.0.0.0 multicast setting in one of the startup scripts for performance reasons). You should also check out the "snoop" command.
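The "instantaneous response" rule can be wrapped in a tiny shell helper that times a command and flags it if it runs long. This is a hypothetical sketch (the function name and the 5-second threshold are arbitrary choices, not from the original post):

```shell
# Hypothetical helper: run a command and complain if it takes longer
# than a given number of seconds -- per the rule above, `arp -a` and
# `netstat -r` should answer immediately on a healthy network.
slow_check() {
    limit="$1"; shift
    start=$(date +%s)
    "$@" > /dev/null 2>&1
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -gt "$limit" ]; then
        echo "SLOW (${elapsed}s): $*"
    else
        echo "OK: $*"
    fi
}

slow_check 5 arp -a
slow_check 5 netstat -r
```

A SLOW line points at the command (and hence the ARP entry or route) worth investigating with snoop.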
If you are serious about performance tuning, please listen to me this time: DOWNLOAD SymBEL FROM THE SUN WWW SITE. It is free and suggests ways to improve server performance. You will never regret having this tool.
david@smooth1.co.uk (David Williams) writes:
DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION DISCLAIMER: THESE MAY WELL CRASH YOUR ENGINE AND LEAVE YOU WITH CORRUPTION
Got your attention!! These are not things to try unless you have a lot of experience and confidence with Online and can be sure you can restore your data if something fails...Don't say I didn't warn you..
PC_POOLSIZE - An ONCONFIG parameter which sets the size of the stored procedure cache which is otherwise not configurable. Warning: Changing it can result in "Assert Failed: Internal Error - Segmentation Violation."
PC_HASHSIZE - Related to PC_POOLSIZE and must be a prime number
On 13th September 2000 kagel@bloomberg.net (Art S. Kagel) wrote:-
PC_HASHSIZE is related to PC_POOLSIZE, the former is the number of hash buckets and the latter is the number of entries permitted in each bucket.
Similar entries for DS_HASHSIZE & DS_POOLSIZE should be added to control the size of the data distribution cache if needed.
End of Art's response
QSTATS - An ONCONFIG parameter which can be set to 1 to enable queue statistics. Used to enable onstat -g qst.
WSTATS - An ONCONFIG parameter which can be set to 1 to enable wait statistics. Used to enable onstat -g wst.
In Online 7.2x run onmode -i 208 so that when an application encounters error -208, Online will panic and shut down, leaving any shared memory dumps you requested.
Run Online with oninit -v to get more debug messages as Online starts up.
Environment variables to set before starting Online:-
KAIOOFF=1 - Disable KAIO.
KAIOON=1 - Enables KAIO on some platforms.
ASFDEBUG=1 - Stop the engine from shutting down if it panics. Instead it 'freezes' and onstat can still be used for analysis.
AIOTRACE=1 - Writes additional messages to the online.log
jguzman@transre.com (Juan R. Guzman) writes:-
Another one is if you are using DR and want to see what's going on between servers set DR_TRACE=1 and pipe oninit to a file.
pbonting@worldaccess.nl (Paul Bonting) writes:-
charlie.muntz@vallley.net (Charlie Muntz) wrote:

<Online 7.1 on a SPARC/1000; Solaris 2.4
<Trying to create an index on an integer field in a fragmented table
<(fragged on the int); If the table is approx 20K rows, engine wants about
<50 TBLSPACES during index creation;
<If the table is 200K rows, the engine wants about 400 TBLSPACES;
<We need 15 million rows;
<Engine initializes with TBLSPACES set at 500, but not with TBLSPACES set at
<600. I have not tried any larger values.
<The manual says nothing about (temporary) need for lots of TBLSPACES when
<creating an index on a fragged table.
<Any suggestions?
We're testing Online 7.1 on a Stratus FTX 2.3 platform. The sizes of our tables range between 1.5 and 30 million rows.
We encountered the same problem in the case of a fragmented table and a similar problem with an unfragmented table (create index cannot allocate shared memory).
For both problems we use the following workaround. We set the undocumented environment-variable NOSORTINDEX to true. (export NOSORTINDEX=true)
Both problems (index creation on fragmented and unfragmented tables) have been reported to our Informix support contact. We were informed that both problems have informix bugnumbers.
kagel@bloomberg.net (Art S. Kagel) writes:
If you are still having buffer wait problems you can try increasing BUFFERS, and then you may have to use the undocumented ONCONFIG parameter LRUPRIORITY to alter how clients select LRU queues, to reduce contention.
My only instructions from Informix on using these is that their use MAY help relieve LRU and buffer contention on very busy systems with a number of active sessions that approximate the number of LRUs and MAY also help with large numbers of users. I was told to "PLAY" with them and see what happens based on the descriptions.
These values can be bitwise OR'd to create values in the range 0-31 which control all or part of the LRU selection and wait policies. A value of 0x11 would cause each session initially to always select the same LRU queue, which MAY eliminate LRU contention when the number of sessions is not significantly larger than LRUS. If sessions < LRUS it will in effect make each LRU private to a session or two. The even values (2, 4, 8) will cause a session to remain waiting on the selected LRU and not rehash to try to find a less hotly contended one.
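For the arithmetic: the flag values combine by plain bitwise OR, so the 0x11 in the example is just 0x10 | 0x01. A quick shell check:

```shell
# Example: composing an LRUPRIORITY-style value by OR-ing flag bits.
# 0x10 | 0x01 = 0x11 = decimal 17, within the stated 0-31 range.
echo $(( 0x10 | 0x01 ))
```

This prints 17, i.e. the decimal form of 0x11.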
On 1st June 2001 rbernste@alarismed.com (Bernstein, Rick) wrote:-
The proper environment variable is NOLRUPRIO.

On 5th July 2001 ahamm@sanderson.net.au (Andrew Hamm) wrote:-
>One is the priority of index pages being set high and therefore
>saturating the buffers. Index pages can be automatically degraded over
>time by setting the environment variable LRUAGE=1. If you want, you can
>revert back to the 7.30 algorithm by setting the environment variable
>NOLRUPRIO=1. I was not aware that this was fixed in 7.31.anything.
We've recently had a situation where NOLRUPRIO crippled an engine. Informix tech support then said that NOLRUPRIO was buggy and defective and, even worse, does the opposite of LRUAGE, and they will fight each other quite bitterly if you enable both. Stick with LRUAGE, was the advice.
On 16th October 2000 aef@mfs.misys.co.uk (Tony Flaherty) wrote:-
I was told by an engineer from Informix a day or two ago that an SHMBASE of 0x0 results in unreadable shared memory dumps for HP-UX users, as all of the internal pointers are screwed up in the dump file.
On 27th October 2000 bogdan.neagu@alcatel.com (Bogdan Neagu) wrote:-
Try onstat -g dis
On 4th April 2003 stefan@weideneder.de (Stefan Weideneder) wrote:-
Set the ONCONFIG parameter WSTATS to 1. (You may need to add the parameter to the end of your config file.)
Restart the engine and monitor the system during peaks.
onstat -z
sleep 1200 # or simply wait 20 minutes
dbaccess sysmaster <<eof
[the start of this query was lost from the original posting - Ed]
... < 1000
and reason != 'condition'
group by 2
order by 1 desc;
eof
You will get some information about internal waits, in microseconds. It's not the number of waits, but the amount of time threads were waiting for an event.
Reason       Description
----------   -----------------------------------------
aio          waits for handling AIO requests
lock         waits for release of locks
mt ready     waits inside ready queue
checkpoint   waits for completion of checkpoints
buffer       waits for buffer access
running      that's not a wait - it's the running time
Has anyone used SKINHIBIT=1 in a tbconfig file for 5.10 Online? Informix Tech support says this turns off "smart disk".
On 18th September 2001 Glyn.Balmer@pirelli.com (Glyn Balmer) wrote:-
On ICL (now Fujitsu) DRS6000 hardware there was an optional piece of hardware called SCAFS (Son of CAFS (Content Addressable File Store)). (CAFS was the "mainframe" VME version.) This was an extra board with an onboard processor that did the majority of the donkeywork on complex queries before returning the recordset to the calling application. We had SCAFS on our DRS6000, which at the time was running Informix-OnLine 5.01. We did experiment with turning it on and off (using SET EXPLAIN ON in an attempt to determine the effect thereof) but I really cannot remember whether there was a significant effect in having it switched off. I suspect that as the software/hardware cost some £6k (in 1991!) we elected to keep it switched on.
On 20th December 2001 ahamm@sanderson.net.au (Andrew Hamm) wrote:-
Dump of the dictionary:
$ onstat -g dic cdr_deltab_000001
Informix Dynamic Server Version 7.31.FD1A -- On-Line -- Up 1 days 05:49:04 -- 957440 Kbytes
Dictionary entry for table: cdr_deltab_000001 [hashes to list#: 10]
ddt_tabfullname: v4prod@webprod_tcp:informix.cdr_deltab_000001 ddt_partnum: 23068731
ddt_fextsize: 16 ddt_nextsize: 16 ddt_locklevel: 2
ddt_flag: -2147483648 ddt_flag2: 0 ddt_ps: 0
ddt_row: 136ede438
ddt_altcount: 37 ddt_ncols: 11
ddt_rowsize: 153 ddt_nallidxs: 1 ddt_nindexes: 0
ddt_type: T
ddt_nrows: 10 ddt_npused: 1 ddt_tabid: 183
ddt_majversion: 183 ddt_minversion: 2 ddt_perms: 136edec90
Table Permissions:
Userthread has
Userthread has
Userthread has
ddt_cols: 136ede838 ddt_indexes: 136ede568 ddt_uniq: 136eded58
ddt_ref: 0 ddt_check: 136edeeb8 ddt_dummytab: 136ede5d8
ddt_nopcls: 0 ddt_opcls: 0 ddt_numreftabs: 0
ddt_reftabs: 0 ddt_next: 136e48050 ddt_prev: 1361bc450
ddt_refcount: 0 ddt_frags: 0

Column Descriptors:
ddc_name: cdrserver ddc_colno: 1 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 0
ddc_len: 4 ddc_nunique: 0 ddc_next: 136ede898
ddc_name: cdrtime ddc_colno: 2 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 4
ddc_len: 4 ddc_nunique: 0 ddc_next: 136ede8f8
ddc_name: cust_code ddc_colno: 3 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 8
ddc_len: 6 ddc_nunique: 0 ddc_next: 136ede958
ddc_name: user_id ddc_colno: 4 ddc_default: 0
ddc_flags: 8388608 ddc_type: 8388608 ddc_start: 14
ddc_len: 20 ddc_nunique: 10 ddc_next: 136ede9b8
ddc_name: password ddc_colno: 5 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 34
ddc_len: 32 ddc_nunique: 0 ddc_next: 136edea18
ddc_name: security_level ddc_colno: 6 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 66
ddc_len: 2 ddc_nunique: 0 ddc_next: 136edea78
ddc_name: create_date ddc_colno: 7 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 68
ddc_len: 4 ddc_nunique: 0 ddc_next: 136edead8
ddc_name: private_email ddc_colno: 8 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 72
ddc_len: 70 ddc_nunique: 0 ddc_next: 136edeb38
ddc_name: expiry_date ddc_colno: 9 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 142
ddc_len: 4 ddc_nunique: 0 ddc_next: 136edeb98
ddc_name: status ddc_colno: 10 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 146
ddc_len: 1 ddc_nunique: 0 ddc_next: 136edebf8
ddc_name: fgn_id ddc_colno: 11 ddc_default: 0
ddc_flags: 0 ddc_type: 0 ddc_start: 147
ddc_len: 6 ddc_nunique: 0 ddc_next: 0
Index Descriptors:
ddil_name: 183_83 ddil_keylen: 20 ddil_flags: 8
ddil_next: 0 ddil_colno: { 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 }
ddil_count: 1 ddil_frags: 0
ddil_state: D
Referential Constraints:
Unique Constraints:
ddc_constrid: 83 ddc_owner: informix ddc_name: u183_83
ddc_type: P ddc_flags: 0 ddc_uindex: 136ede568
ddc_urefc: 0 ddc_selfrefcnt: 0
ddc_colnml: { user_id }
ddc_state: D
Check Constraints:
ddc_constrid: 83 ddc_owner: informix ddc_name: u183_83
ddc_type: C ddc_flags: 32 ddc_checkexp: 136edee70
ddc_checktxt: ddc_txtsize: 0
ddc_colnml: { }
ddc_state: D
Triggers:
1 entries found.
ddt_viotid: 0
ddt_diatid: 0
On 13th May 2005 david@smooth1.co.uk (David Williams) wrote:-
I have confirmed that this works under IDS 10.00.TC1TL under Windows, even though the "onstat -" usage output gives " dic Print dictionary cache information" with no mention of taking a table name.
Trying to supply partnums, hex partnums, or database:table formats does not work. If the same table name appears in the dictionary cache more than once (from different databases) then all entries are displayed and the message at the end indicates how many tables were displayed. NOTE: Each entry starts with ddt_tabfullname, which shows which database contains the table.
The environment variable CDR_GRPCMPTRS
On 27th March 2003 mpruet@attbi.com (Madison Pruet) wrote:-
Nope --- CDR_GRPCMPTRS stands for CDR_Grouper_ClarenceMadisonPruetTransactionSize. (Seriously)
This is the number of rows that must be in the transaction before the quick compression kicks in. If you were advised to use it, it was probably because of a bug in the quick compression in the early 7.31 releases. If I advised you to use it, then I'll bet that the bug has long since been patched.
KAIO (Kernel Asynchronous I/O) is an Online 7.x feature and will be enabled on most platforms if:

a) Your platform supports it (check your release notes under $INFORMIXDIR/release somewhere)
b) You are using raw partitions.

On certain platforms you have to perform additional steps:-

HP-UX

On 21st Dec 1997 ecstahl@aol.com (Eric Stahl) wrote:-

The release notes should cover...

1- Informix down
2- Install the special device driver: sam; kernel cfg; drivers; select asyncdsk; actions; add driver to kernel; create a new kernel; move kernel into place and continue shutdown [reboot] now...
3- Create the async special device file: mknod /dev/async c 101 1
4- Temporarily set the KAIO environment variable. It will need to be in a startup script: export KAIOON=1
5- Start Informix
6- Check for KAIO threads: onstat -g ath

AIX 4.1.5

(Thanks to K.Gotthardt@em.uni-frankfurt.de (Klaus Gotthardt) for this):-

/usr/sbin/mkdev -l aio0

You have to do this every time the computer boots, or simply place an entry in the /etc/inittab file, like this:

kaio:2:wait:/usr/sbin/mkdev -l aio0
danny@uk.ibm.com (Daniel Williams) writes:-
Carlos (and anyone else who's interested)
Here's the procedure for performing a warm restore of a DBSpace using ON-Bar. It works - I have used it extensively in testing.
ennis@ssax.com (Bill Ennis) writes:-
Wow, what a coincidence! I just read a page on this in the Performance Tuning Training manual.
Queries NOT parallelized:
Informix has a program called archecker which is available upon request. It was put together by Advanced Support and will (hopefully) be made generally available.
kagel@bloomberg.com (Art S. Kagel) writes:-
Contact Informix's Advanced Technology Group, or have French sales do so. They can get it for you. The best solution, however, is to upgrade to versions 5.08 or 7.14 and higher. The bug, which apparently existed for years in all earlier versions, caused scattered pages not to be archived if there was sufficiently heavy update activity on the engine during the backup. This was finally fixed in 5.08+ and 7.14+.
For logical log tapes: On 11th Jun 1997 J.Clutterbuck wrote a logical log tape validator for Online 5.x. This is written in perl5 and available as APPENDIX P Logical Log Tape Validator for Online 5.x
On Dec 3 mdstock@informix.com (Mark D. Stock) wrote:
You can get this information from SMI as follows:
SELECT uniqid, (used/size*100)
  FROM sysmaster:syslogs
 WHERE uniqid >= ( SELECT MIN(tx_loguniq) FROM sysmaster:systrans
                    WHERE tx_loguniq > 0 )
UNION
SELECT uniqid, 0.00
  FROM sysmaster:syslogs
 WHERE uniqid < ( SELECT MIN(tx_loguniq) FROM sysmaster:systrans
                   WHERE tx_loguniq > 0 )
On 11th Nov 1998 jayallen@auragen.com (Jay Allen) wrote:-
I just double-checked this in INFORMIX UNLEASHED and it says that the NT version of OWS (or whatever the hell it's called now) comes bundled with a storage manager. onbar on NT uses this storage manager as its default XBSA program.
On 8th Jan 1999 david@smooth1.co.uk (David Williams) wrote:-
Note Online 7.30.UC5 comes with ISM (Informix Storage Manager), which can handle simple tape devices (i.e. not jukeboxes).
On 27th Feb 1998 clem@informix.com (Clem Akins) wrote :-
The Informix Dynamic Server v 7.3 (no longer called OnLine) will have a version of Legato's NetWorker Server product bundled. This product has an interface called Informix Storage Manager (ISM) that works either via a GUI or a command line.
ISM comes with several restrictions, designed to keep it from competing with the Legato full product. Among them:
From the experience I've had with ISM (not extensive) the product looks like a really good archive system. It is easy to use and well suited for installations that want an out-of-the-box solution. Applications that are more sophisticated will require a full-featured suite of products, such as Legato or OmniBack as well as the hardware to support them.
If you already have Legato installed, then *do not* install the ISM. It is a subset of Legato, and will overwrite some important configuration and backup history files. ISM contains an interface onto the Legato product and still uses onbar for the actual tape management, via the XBSA interface.
On 27th Feb 1998 dbaresrc@xmission.xmission.com (Carlton Doe) wrote :-
I have been trying to use Veritas' NetBackup and it has been a complete failure. Veritas has been trying to figure out why but . . . . .
In the 7.3 releases (NT and Unix), Legato Light will be bundled and called the Informix Storage Manager (ISM). Legato, as a product, has the highest degree of integration with Informix, their light product notwithstanding. Following behind are ADSM and OmniBack.
Informix also does not "certify" OnBar products, which may be why Veritas' product is not working correctly. They only "certify" that the data delivered to the XBSA layer is as it should be and that they (Informix) correctly process the required return messages from the Storage Management product.
What the SM product does once it receives the data is out of Informix's hands. They cannot and do not test the SM product, as they (Informix) do not have the SM's source code. In other words: Buyer Beware.
You need to test like crazy including the basic stuff like "onbar -bw; oninit -i; onbar -rw" and see if the instance comes back.
On 22nd Dec 1997 jeffl@etcscan.com (Jeff Lanham) wrote:-
I got this one from Informix Support. It's slightly more helpful.
SELECT p.sid, username, tty, seqscans, dsksorts, total_sorts
  FROM syssesprof p, syssessions s
 WHERE p.sid = s.sid;