
IDS Forum

RE: Informix Lockup (Wierd)

Posted By: Phillips, Rob
Date: Thursday, 30 January 2003, at 4:57 p.m.

In Response To: Informix Lockup (Wierd) (Phillips, Rob)

Oops, my bad... the updatestats.sh isn't scheduled to run every hour, only
at 1am. This script (below) is run every 2 hours, and both recent lockups
have happened during the execution of this script.

#!/bin/ksh
#################################
# Informix "Check It" Script
# Elisa D. Hix, Informix
# 08/23/97
#################################

INFORMIXDIR=/informix
ONCONFIG=onconfig.ows
INFORMIXSERVER=ol_cea_shm
INFORMIXSQLHOSTS=/informix/etc/sqlhosts
TERM=vt100
TERMCAP=/informix/etc/termcap
PATH=.:/informix/bin:/informix/bin:.:/usr/ccs/bin:/usr/ccs/lib:/bin:/usr/bin:/usr/sbin:/usr/etc:/usr/ucb

export INFORMIXDIR INFORMIXSERVER ONCONFIG INFORMIXSQLHOSTS PATH TERM TERMCAP

## Date
echo "------------------------------------------------------------------------" >> /informix/logs/check_it.log
date >> /informix/logs/check_it.log

## Init
exitcode=0

## Check for "On-Line" Mode
checkit=""
checkit=`/informix/bin/onstat - | grep " On-Line "`
if [ -z "$checkit" ] ; then
echo "Informix is down (`date`)." | mail informix
echo "Informix is down (`date`)." >> /informix/logs/check_it.log
exitcode=1
fi

## Check for logs not backed up
checkit=""
checkit=`/informix/bin/onstat -l | grep "U-B" | wc -l`
if [ "$checkit" -ne 19 ] ; then
echo "Informix log backup behind (`date`)." | mail informix
echo "Informix log backup behind (`date`)." >> /informix/logs/check_it.log
exitcode=1
fi
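One fragile point here: the check asserts that exactly 19 logs are in the
"U-B" (used, backed up) state, which breaks silently if logical logs are
ever added or dropped. A version-independent sketch instead counts logs
that are used but not yet backed up. The sample lines below merely simulate
the flag column of `onstat -l` (whose exact layout varies by IDS version);
the real script would pipe `/informix/bin/onstat -l` in their place:

```shell
# Simulated flag/uniqid lines from `onstat -l` (layout is illustrative).
sample_logs="U-B---- 51508
U------ 51509
U-B---- 51510"

# A log that is used but not yet backed up has a U flag and no B flag.
behind=$(echo "$sample_logs" | grep "^U" | grep -cv "B")
if [ "$behind" -gt 0 ] ; then
    echo "Informix log backup behind: $behind log(s) not backed up."
fi
```

With this shape the alert fires on the actual backlog, regardless of how
many logical logs the instance has.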

## Check for dynamically allocated memory segments
checkit=""
checkit=`/informix/bin/onstat -g seg | wc -l`
if [ "$checkit" -ne 12 ] ; then
echo "Informix has allocated more memory segments (`date`). Run onmode -F to unallocate when memory is free." | mail informix
echo "Informix has allocated more memory segments (`date`). Run onmode -F to unallocate when memory is free." >> /informix/logs/check_it.log
exitcode=1
fi

## Check for message log errors
checkit=""
#-cmb checkit=`tail -100 /informix/logs/online.log | grep "err = -"`
## changed to check a full days log -cmb
checkit=`grep "err = -" /informix/logs/online.log`
if [ -n "$checkit" ] ; then
echo "Informix online.log has errors (`date`)." | mail informix
echo "Informix online.log has errors (`date`)." >> /informix/logs/check_it.log
exitcode=1
fi
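Note that grepping the whole online.log every run re-reports the same old
errors each cycle (presumably why the commented-out `tail -100` variant
existed). A sketch of an alternative that only scans lines appended since
the previous run, using a small state file (file names here are
placeholders, and the sample log is built inline for illustration):

```shell
LOG=online.log.sample        # stand-in for /informix/logs/online.log
STATE=online.log.lastcount   # remembers how many lines were seen last run

# Build a sample log for the sketch.
printf '%s\n' \
  "13:27:51 Checkpoint Completed: duration was 1 seconds." \
  "14:27:24 listener-thread: err = -25573: oserr = 72" > "$LOG"
rm -f "$STATE"

last=$(cat "$STATE" 2>/dev/null)
[ -z "$last" ] && last=0
start=$(expr $last + 1)
# Only look at lines added after the last scanned position.
new_errs=$(tail -n +$start "$LOG" | grep "err = -")
wc -l < "$LOG" | tr -d ' ' > "$STATE"

if [ -n "$new_errs" ] ; then
    echo "Informix online.log has new errors."
fi
```

Each run then alerts at most once per error, instead of once per cycle
until the log is rotated.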

## Check for tables with extents > 3 in cea database
checkit=""
#-cmb checkit=`/informix/bin/dbaccess sysmaster extent_test.sql 1>/dev/null 2>&1`
#-cmb changed the above line. it was always setting checkit to null
#-cmb also modified the extent_test.sql file
#-cmb
#-cmb checkit=`/informix/bin/dbaccess sysmaster extent_test.sql 2>/dev/null`
#-cmb
/informix/bin/dbaccess sysmaster extent_test.sql 2>/dev/null 1> checkitF
checkit=`wc -l checkitF | awk '{print $1}'`
rm checkitF
#-cmb if [ -n "$checkit" ] ; then
# use -gt here: inside [ ], ">" is output redirection, not a comparison
if [ "$checkit" -gt 2 ] ; then
echo "Informix found tables with multiple extents (`date`). Run extent_test.sql in /informix/scripts for table detail." | mail informix
echo "Informix found tables with multiple extents (`date`). Run extent_test.sql in /informix/scripts for table detail." >> /informix/logs/check_it.log
exitcode=1
fi

## Check for dbspace usage ( 4000 Kb threshold / 1000 pages )
DISKFULL=1000
TOTFREE=0
#-cmb /informix/bin/onstat -d | grep " 6" | awk '{print $2" "$3" "$6" "$8}' |
/informix/bin/onstat -d | awk 'NR>35 && NR<67 {print $2" "$3" "$6" "$8}' |
while read CHK DBS FREE DBSPACE
do
if [ "$FREE" = "N" ] ; then
FREE=0
fi

TOTFREE=`expr ${TOTFREE} + ${FREE}`
done
if [ "$TOTFREE" -le $DISKFULL ] ; then
echo "Informix dbspace defndbs ($CHK $DBS $DBSPACE $TOTFREE $DISKFULL) is approaching full (`date`)." | mail informix
echo "Informix dbspace defndbs ($CHK $DBS $DBSPACE $TOTFREE $DISKFULL) is approaching full (`date`)." >> /informix/logs/check_it.log
#-cmb exitcode=1
fi

#
## replaced the entire code to the end of the script
## with the following code.
## If any dbspace is less than 1000 pages send a email to informix and root
#-cmb

/informix/bin/onstat -d | awk 'NR>35 && NR<67 {print $2" "$3" "$5" "$6}' |
while read CHK DBS ALLOC FREE
do
case $DBS in
1) dbs1="rootdbs"
let dbs1F=$dbs1F+$FREE
let dbs1A=$dbs1A+$ALLOC
;;
2) dbs2="dbsaccount"
let dbs2F=$dbs2F+$FREE
let dbs2A=$dbs2A+$ALLOC
;;
3) dbs3="dbsbilling"
let dbs3F=$dbs3F+$FREE
let dbs3A=$dbs3A+$ALLOC
;;
4) dbs4="dbsclaim"
let dbs4F=$dbs4F+$FREE
let dbs4A=$dbs4A+$ALLOC
;;
5) dbs5="dbsindex"
let dbs5F=$dbs5F+$FREE
let dbs5A=$dbs5A+$ALLOC
;;
6) dbs6="dbslogical"
let dbs6F=$dbs6F+$FREE
let dbs6A=$dbs6A+$ALLOC
;;
7) dbs7="dbspfarch"
let dbs7F=$dbs7F+$FREE
let dbs7A=$dbs7A+$ALLOC
;;
8) dbs8="dbsphysical"
let dbs8F=$dbs8F+$FREE
let dbs8A=$dbs8A+$ALLOC
;;
9) dbs9="dbsremark"
let dbs9F=$dbs9F+$FREE
let dbs9A=$dbs9A+$ALLOC
;;
10) dbs10="dbsstatic"
let dbs10F=$dbs10F+$FREE
let dbs10A=$dbs10A+$ALLOC
;;
11) dbs11="dbstmp"
let dbs11F=$dbs11F+$FREE
let dbs11A=$dbs11A+$ALLOC
;;
12) dbs12="dbstmp1"
let dbs12F=$dbs12F+$FREE
let dbs12A=$dbs12A+$ALLOC
;;
13) dbs13="dbstmp2"
let dbs13F=$dbs13F+$FREE
let dbs13A=$dbs13A+$ALLOC
;;
14) dbs14="remark_dbs"
let dbs14F=$dbs14F+$FREE
let dbs14A=$dbs14A+$ALLOC
;;
15) dbs15="data2_dbs2"
let dbs15F=$dbs15F+$FREE
let dbs15A=$dbs15A+$ALLOC
;;
16) dbs16="data2_dbs3"
let dbs16F=$dbs16F+$FREE
let dbs16A=$dbs16A+$ALLOC
;;
17) dbs17="data2_dbs4"
let dbs17F=$dbs17F+$FREE
let dbs17A=$dbs17A+$ALLOC
;;
18) dbs18="data2_dbs5"
let dbs18F=$dbs18F+$FREE
let dbs18A=$dbs18A+$ALLOC
;;
19) dbs19="data2_dbs6"
let dbs19F=$dbs19F+$FREE
let dbs19A=$dbs19A+$ALLOC
;;
21) dbs21="data2_dbs8"
let dbs21F=$dbs21F+$FREE
let dbs21A=$dbs21A+$ALLOC
;;
22) dbs22="data2_dbs9"
let dbs22F=$dbs22F+$FREE
let dbs22A=$dbs22A+$ALLOC
;;
23) dbs23="data2_dbs10"
let dbs23F=$dbs23F+$FREE
let dbs23A=$dbs23A+$ALLOC
;;
24) dbs24="data2_dbs11"
let dbs24F=$dbs24F+$FREE
let dbs24A=$dbs24A+$ALLOC
;;
25) dbs25="data2_dbs12"
let dbs25F=$dbs25F+$FREE
let dbs25A=$dbs25A+$ALLOC
;;
26) dbs26="data2_dbs13"
let dbs26F=$dbs26F+$FREE
let dbs26A=$dbs26A+$ALLOC
;;
27) dbs27="data2_dbs14"
let dbs27F=$dbs27F+$FREE
let dbs27A=$dbs27A+$ALLOC
;;
esac
done

if [ $dbs1F -lt 1000 ]
then
echo "$dbs1 has less than 1000 pages left, ALLOC: $dbs1A, FREE: $dbs1F" | mail informix root
fi
if [ $dbs2F -lt 1000 ]
then
echo "$dbs2 has less than 1000 pages left, ALLOC: $dbs2A, FREE: $dbs2F" | mail informix root
fi
if [ $dbs3F -lt 1000 ]
then
echo "$dbs3 has less than 1000 pages left, ALLOC: $dbs3A, FREE: $dbs3F" | mail informix root
fi
if [ $dbs4F -lt 1000 ]
then
echo "$dbs4 has less than 1000 pages left, ALLOC: $dbs4A, FREE: $dbs4F" | mail informix root
fi
if [ $dbs5F -lt 1000 ]
then
echo "$dbs5 has less than 1000 pages left, ALLOC: $dbs5A, FREE: $dbs5F" | mail informix root
fi
if [ $dbs6F -lt 1000 ]
then
echo "$dbs6 has less than 1000 pages left, ALLOC: $dbs6A, FREE: $dbs6F" | mail informix root
fi
if [ $dbs7F -lt 1000 ]
then
echo "$dbs7 has less than 1000 pages left, ALLOC: $dbs7A, FREE: $dbs7F" | mail informix root
fi
if [ $dbs8F -lt 1000 ]
then
echo "$dbs8 has less than 1000 pages left, ALLOC: $dbs8A, FREE: $dbs8F" | mail informix root
fi
if [ $dbs9F -lt 1000 ]
then
echo "$dbs9 has less than 1000 pages left, ALLOC: $dbs9A, FREE: $dbs9F" | mail informix root
fi
if [ $dbs10F -lt 1000 ]
then
echo "$dbs10 has less than 1000 pages left, ALLOC: $dbs10A, FREE: $dbs10F" | mail informix root
fi
if [ $dbs11F -lt 1000 ]
then
echo "$dbs11 has less than 1000 pages left, ALLOC: $dbs11A, FREE: $dbs11F" | mail informix root
fi
if [ $dbs12F -lt 1000 ]
then
echo "$dbs12 has less than 1000 pages left, ALLOC: $dbs12A, FREE: $dbs12F" | mail informix root
fi
if [ $dbs13F -lt 1000 ]
then
echo "$dbs13 has less than 1000 pages left, ALLOC: $dbs13A, FREE: $dbs13F" | mail informix root
fi
if [ $dbs14F -lt 1000 ]
then
echo "$dbs14 has less than 1000 pages left, ALLOC: $dbs14A, FREE: $dbs14F" | mail informix root
fi
if [ $dbs15F -lt 1000 ]
then
echo "$dbs15 has less than 1000 pages left, ALLOC: $dbs15A, FREE: $dbs15F" | mail informix root
fi
if [ $dbs16F -lt 1000 ]
then
echo "$dbs16 has less than 1000 pages left, ALLOC: $dbs16A, FREE: $dbs16F" | mail informix root
fi
if [ $dbs17F -lt 1000 ]
then
echo "$dbs17 has less than 1000 pages left, ALLOC: $dbs17A, FREE: $dbs17F" | mail informix root
fi
if [ $dbs18F -lt 1000 ]
then
echo "$dbs18 has less than 1000 pages left, ALLOC: $dbs18A, FREE: $dbs18F" | mail informix root
fi
if [ $dbs19F -lt 1000 ]
then
echo "$dbs19 has less than 1000 pages left, ALLOC: $dbs19A, FREE: $dbs19F" | mail informix root
fi
if [ $dbs21F -lt 1000 ]
then
echo "$dbs21 has less than 1000 pages left, ALLOC: $dbs21A, FREE: $dbs21F" | mail informix root
fi
if [ $dbs22F -lt 1000 ]
then
echo "$dbs22 has less than 1000 pages left, ALLOC: $dbs22A, FREE: $dbs22F" | mail informix root
fi
if [ $dbs23F -lt 1000 ]
then
echo "$dbs23 has less than 1000 pages left, ALLOC: $dbs23A, FREE: $dbs23F" | mail informix root
fi
if [ $dbs24F -lt 1000 ]
then
echo "$dbs24 has less than 1000 pages left, ALLOC: $dbs24A, FREE: $dbs24F" | mail informix root
fi
if [ $dbs25F -lt 1000 ]
then
echo "$dbs25 has less than 1000 pages left, ALLOC: $dbs25A, FREE: $dbs25F" | mail informix root
fi
if [ $dbs26F -lt 1000 ]
then
echo "$dbs26 has less than 1000 pages left, ALLOC: $dbs26A, FREE: $dbs26F" | mail informix root
fi
if [ $dbs27F -lt 1000 ]
then
echo "$dbs27 has less than 1000 pages left, ALLOC: $dbs27A, FREE: $dbs27F" | mail informix root
fi

exit 0
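The 26 near-identical per-dbspace case entries and if blocks above could be
collapsed into a single awk pass that sums allocated and free pages per
dbspace name and flags anything under the threshold. This is only a sketch:
the sample rows stand in for the chunk section of `onstat -d` (dbspace
number, allocated pages, free pages, dbspace name), whose real column
positions vary by version, and each alert line would be piped to
`mail informix root` in the real script:

```shell
THRESHOLD=1000

# Simulated chunk rows: dbspace-number, allocated pages, free pages, name.
sample_chunks="1 50000 20000 rootdbs
2 100000 600 dbsaccount
2 100000 300 dbsaccount"

# Sum pages per dbspace name, then report any dbspace under the threshold.
alerts=$(echo "$sample_chunks" | awk -v limit=$THRESHOLD '
    { alloc[$4] += $2 ; free[$4] += $3 }
    END {
        for (d in free)
            if (free[d] < limit)
                printf "%s has less than %d pages left, ALLOC: %d, FREE: %d\n", d, limit, alloc[d], free[d]
    }')
echo "$alerts"   # real script: echo "$alerts" | mail informix root
```

With this shape, adding a new dbspace needs no script change; it is picked
up from the onstat output automatically.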

-----Original Message-----
From: Jack Parker [mailto:vze2qjg5@verizon.net]
Sent: Thursday, January 30, 2003 4:47 PM
To: ids@iiug.org; Phillips, Rob
Subject: Re: Informix Lockup (Wierd) [184]


Is there anything else this database is supposed to do other than run update
statistics? You might want to back off on the frequency of running that
script a touch unless your database is incredibly volatile. I'd opt for
once a week perhaps. Change that and then see if things get better (i.e.
fix the gaping wound in your stomach before worrying about the headache).

cheers
j.
----- Original Message -----
From: "Phillips, Rob" <RPhillips@ce-a.com>
To: <ids@iiug.org>
Sent: Thursday, January 30, 2003 3:14 PM
Subject: Informix Lockup (Wierd) [184]


> Occasionally I am experiencing Informix locking up. It won't accept any db
> connections, or process any SQL commands on connections that are currently
> open. We are running on a dual processor box and durring this lockup I am
> noticing that 1 cpu is maxed out with 100% usr processes. The other
> processor is doing virtually nothing.
>
> Informix was not responding for approximately 30 minutes. When it came
> back online I checked some logs. The lockup started at approximately 2pm
> (14:00 hrs) Here is what the logs said:
>
> 13:27:51 Checkpoint Completed: duration was 1 seconds.
> 13:35:32 Logical Log 51508 Complete.
> 13:35:35 Logical Log 51508 - Backup Started
> 13:35:36 Logical Log 51508 - Backup Completed
> 13:43:22 Logical Log 51509 Complete.
> 13:43:25 Logical Log 51509 - Backup Started
> 13:43:26 Logical Log 51509 - Backup Completed
> 13:51:04 Logical Log 51510 Complete.
> 13:51:07 Logical Log 51510 - Backup Started
> 13:51:07 Logical Log 51510 - Backup Completed
> 13:57:53 Checkpoint Completed: duration was 1 seconds.
> 13:59:13 Logical Log 51511 Complete.
> 13:59:16 Logical Log 51511 - Backup Started
> 13:59:16 Logical Log 51511 - Backup Completed
> 14:27:24 listener-thread: err = -25573: oserr = 72: errstr = : Network
> driver cannot accept a connection on the port.
> System error = 72.
> 14:27:24 listener-thread: err = -25587: oserr = 0: errstr = : Network
> receive failed.
>
> 14:27:24 listener-thread: err = -25573: oserr = 72: errstr = : Network
> driver cannot accept a connection on the port.
> System error = 72.
> 14:27:24 listener-thread: err = -25573: oserr = 72: errstr = : Network
> driver cannot accept a connection on the port.
> System error = 72.
>
> This last error "Network driver cannot accept a connection on the port."
> repeated approximately 50 times until
>
> 14:27:55 listener-thread: err = -25573: oserr = 72: errstr = : Network
> driver cannot accept a connection on the port.
> System error = 72.
>
> Another (possibly helpful) bit of information was that an updateStatistics
> script is set to run every hour on the hour. Here is the
> updatestatistics.sh that runs. I'm thinking that it was possibly this
> script that caused the lockup, because both the script and the lockup
> started at the same time. However, the lockup occurs very infrequently,
> maybe once every other month at most. I am no Informix DBA, but I'm all we
> have, so if anyone could look over this script or has experienced a
> similar/same problem, could you PLEASE HELP. Thank you very much.
>
> #!/usr/bin/ksh
> # This program runs update statistics for a database in parallel
> # by invoking separate connections for each table
> #
> # USAGE : updatestat.sh <dbname> <run_options> <no_of_processes>
> #
> # AUTHOR : Varadharajan Kope mkv@infogain.com
> #
> # NOTE :-
> # * Change the value of US_DIR and test before use. US_DIR is the directory
> # where the update statistics scripts and execution outputs are kept.
> # A separate file with the name <tablename>.sql is created for each table.
> # * Run 'update statistics' on the database once before using this script
> # for the first time for best results.
> # * Distribution level selection is based on nrows. SMALL_TAB specifies
> # the no. of rows for a small table and LARGE_TAB, for a large table.
> # * To know information about the arguments, type prl_us.sh at the prompt
> # * Do not run update statistics very often. Once a week would be fine.
> #
> # DISCLAIMER :-
> # The author is not responsible for any damage this script could cause
> # to your system performance. But, for any performance improvement, he is.
> # USE IT AT YOUR OWN RISK.
>
> #--LOCAL MODIFICATIONS--------
> # Carl M. Barnes, Data Basics International
> # modified for CEA
> # set informix vars
>
> INFORMIXDIR=/informix
> ONCONFIG=onconfig.ows
> INFORMIXSERVER=ol_cea_shm
> INFORMIXSQLHOSTS=/informix/etc/sqlhosts
> TERM=vt100
> TERMCAP=/informix/etc/termcap
>
> PATH=.:/informix/bin:/informix/bin:.:/usr/ccs/bin:/usr/ccs/lib:/bin:/usr/bin:/usr/sbin:/usr/etc:/usr/ucb
>
> export INFORMIXDIR INFORMIXSERVER ONCONFIG INFORMIXSQLHOSTS PATH TERM TERMCAP
>
> ##-cmbexport US_DIR=/dataconv/$DBNAME.stats
>
> #-----------------------------
>
> # ---------------------------------------------------------------------
> # This function generates update statistics scripts for a table
> # using the strategy suggested by Informix in the performance guide
> # and the release notes which is
> # 1. MEDIUM on all columns that are not part of an index as a single
> # statement with distributions only.
> # 2. HIGH on all columns that are part of an index as separate statements.
> # 3. For indexes that begin with the same subset of columns,
> # run HIGH for the first column in each index that differs.
> #
> gen_us()
> {
> DBNAME=$1
> TABNAME=$2
>
> echo "set lock mode to wait;" > $TABNAME.sql
> echo "set optimization all_rows;" >> $TABNAME.sql
> # echo "set isolation to dirty read;" >> $TABNAME.sql
>
>
> # for small tables run update statistics HIGH
>
> eval dbaccess $DBNAME 2>/dev/null 1>&2 <<EOH
>
> unload to "d.out" delimiter ';'
> select 'update statistics high for table ' || tabname
> from systables
> where tabname = "$TABNAME"
> and nrows < 1000 ;
>
> EOH
>
> if [ `cat d.out | wc -l` -ne 0 ]
> then
> cat d.out >>$TABNAME.sql
> rm d.out
> return
> fi
>
> # Identify columns that differ where indexes start with the same columns
>
> TABID=`get_tabid $DBNAME $TABNAME | sed '1,4d'`
>
> echo $TABID
>
> eval dbaccess $DBNAME 2>/dev/null 1>&2 <<EOF
>
> set optimization all_rows;
>
> select tabid, idxname, abs(part1) col, 1 part from sysindexes
> where part1 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part2) col, 2 part from sysindexes
> where part2 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part3) col, 3 part from sysindexes
> where part3 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part4) col, 4 part from sysindexes
> where part4 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part5) col, 5 part from sysindexes
> where part5 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part6) col, 6 part from sysindexes
> where part6 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part7) col, 7 part from sysindexes
> where part7 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part8) col, 8 part from sysindexes
> where part8 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part9) col, 9 part from sysindexes
> where part9 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part10) col, 10 part from sysindexes
> where part10 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part11) col, 11 part from sysindexes
> where part11 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part12) col, 12 part from sysindexes
> where part12 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part13) col, 13 part from sysindexes
> where part13 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part14) col, 14 part from sysindexes
> where part14 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part15) col, 15 part from sysindexes
> where part15 != 0 and tabid = $TABID
> union
> select tabid, idxname, abs(part16) col, 16 part from sysindexes
> where part16 != 0 and tabid = $TABID
> INTO TEMP mytmp ;
>
> -- select m1.tabid, m1.idxname, m2.idxname idxname2, max(m1.part) part
> -- from mytmp m1, mytmp m2
> -- where
> -- m1.part = m2.part
> -- and m1.col = m2.col
> -- and m1.idxname != m2.idxname
> ---- and m1.rowid != m2.rowid
> -- group by 1, 2, 3
> -- into temp mytmp2;
>
> -- select * from mytmp2 ;
>
> -- select col from mytmp
> -- where part = 1
> -- into temp high_col;
>
> select unique col from mytmp
> into temp high_col;
>
> -- insert into high_col
> -- select m1.col
> -- from mytmp m1, mytmp2 m2
> -- where m2.idxname = m1.idxname
> -- and m1.part = ( m2.part + 1 )
> -- and exists ( select * from mytmp
> -- where mytmp.idxname = m2.idxname2
> -- and mytmp.part = ( m2.part + 1) ) ;
>
> select * from high_col ;
>
> -- MEDIUM - all columns in a table which do not require high distribution
> unload to "a.out" delimiter '|'
> select distinct colname
> from systables t, syscolumns c
> where t.tabid = c.tabid
> and t.tabid = $TABID
> and c.colno not in ( select col from high_col );
>
> -- HIGH distribution - columns that head an index, one US statement per index
> output to "b.out" without headings
> select distinct tabname || '( ' || colname || ' ) ;'
> from systables, high_col, syscolumns
> where systables.tabid = $TABID
> and syscolumns.tabid = $TABID
> and syscolumns.colno = high_col.col;
>
>
> -- generate statement to include fields that requires low distribution
> -- for each multi column index, US low on all of its columns
> -- unload to "c.out" delimiter '|'
> -- select 'update statistics high for table ' || tabname || '(' || c1.colname,
> -- c2.colname, c3.colname, c4.colname, c5.colname,
> -- c6.colname, c7.colname, c8.colname, c9.colname, c10.colname,
> -- c11.colname, c12.colname, c13.colname, c14.colname,
> -- c15.colname, c16.colname
> -- from systables t, sysindexes i,
> -- syscolumns c1 , outer syscolumns c2 , outer syscolumns c3 ,
> -- outer syscolumns c4, outer syscolumns c5 , outer syscolumns c6 ,
> -- outer syscolumns c7 , outer syscolumns c8, outer syscolumns c9 ,
> -- outer syscolumns c10, outer syscolumns c11, outer syscolumns c12,
> -- outer syscolumns c13, outer syscolumns c14, outer syscolumns c15,
> -- outer syscolumns c16
> -- where t.tabid = i.tabid
> -- and c1.tabid = i.tabid and abs(i.part1) = c1.colno
> -- and c2.tabid = i.tabid and abs(i.part2) = c2.colno
> -- and c3.tabid = i.tabid and abs(i.part3) = c3.colno
> -- and c4.tabid = i.tabid and abs(i.part4) = c4.colno
> -- and c5.tabid = i.tabid and abs(i.part5) = c5.colno
> -- and c6.tabid = i.tabid and abs(i.part6) = c6.colno
> -- and c7.tabid = i.tabid and abs(i.part7) = c7.colno
> -- and c8.tabid = i.tabid and abs(i.part8) = c8.colno
> -- and c9.tabid = i.tabid and abs(i.part9) = c9.colno
> -- and c10.tabid = i.tabid and abs(i.part10) = c10.colno
> -- and c11.tabid = i.tabid and abs(i.part11) = c11.colno
> -- and c12.tabid = i.tabid and abs(i.part12) = c12.colno
> -- and c13.tabid = i.tabid and abs(i.part13) = c13.colno
> -- and c14.tabid = i.tabid and abs(i.part14) = c14.colno
> -- and c15.tabid = i.tabid and abs(i.part15) = c15.colno
> -- and c16.tabid = i.tabid and abs(i.part16) = c16.colno
> -- and t.tabname = "$TABNAME"
> -- and i.part2 is not null
> -- and abs(i.part2) > 0 ;
>
> EOF
>
> if [ `cat a.out | wc -l` -ne 0 ]
> then
> ( echo $TABNAME ; cat a.out ) |
> sed '1,1s/^/update statistics medium for table /' |
> sed '1,1s/$/(/' |
> sed '2,$s/|/,/' | sed '$,$s/,$/ ) distributions only ;/' >>$TABNAME.sql
> rm a.out
> fi
>
> if [ `cat b.out | wc -l` -ne 2 ]
> then
> # Write non-empty lines.
> sed '
> /./ {
> p
> d
> }
> :Empty
> /^$/ {
> N
> s/.//
> b Empty
> }' b.out | sed '1,$s/^/update statistics high for table /' >>$TABNAME.sql
> rm b.out
> fi
>
> ## if [ `cat c.out | wc -l` -ne 0 ]
> ## then
> ## sed '1,$s/||.*/);/' c.out | sed '1,$s/|/,/g' >>$TABNAME.sql
> ## rm c.out
> ## fi
>
> }
>
>
> # ---------------------------------------------------------------------
> # This function generates the list of tables on which update statistics
> # would be run. This function would not select system catalog tables.
>
> gen_tab_list()
> {
> DBNAME=$1
>
> # Read system catalog information to generate the list of tables
>
> dbaccess $DBNAME 2>/dev/null <<EOQ1
> unload to "tables.list" delimiter " "
> select tabname
> from systables
> where tabtype = "T";
>
> EOQ1
>
> }
>
> # ---------------------------------------------------------------------
> # This function gets the tabid for a given table and database
> get_tabid()
> {
> DBNAME=$1
> TABNAME=$2
> dbaccess $DBNAME 2>/dev/null <<EOF
> select tabid from systables where tabname = "$TABNAME"
> EOF
> }
>
> # ---------------------------------------------------------------------
> # This function generate the update statistics script for all tables
> gen_sql()
> {
> DBNAME=$1
>
> echo "Update Statistics script files being generated for $DBNAME"
> echo
>
> gen_tab_list $DBNAME
>
> # generate script for tables
> for i in `cat tables.list`
> do
> echo Generating script for table $i
> gen_us $DBNAME $i
> done
>
> echo
> echo "Update Statistics script files generation completed for $DBNAME"
> }
>
>
> # ---------------------------------------------------------------------
> # This function executes the scripts in parallel
> run_sql()
> {
>
> echo
> echo "Update Statistics scripts started running for $DBNAME"
>
> DBNAME=$1
>
> CURR=0
>
> gen_tab_list $DBNAME
> rm *.out
>
> # run update statistics for all tables
> for i in `cat tables.list`
> do
>
> # Generate SQL script file
> if [ ! -f $US_DIR/$i.sql ]
> then
> gen_us $DBNAME $i
> fi
>
> echo Updating statistics for $i
>
> # run the update statistics script for the current table
> echo Started `date` >$US_DIR/$i.out
> ( dbaccess -e $DBNAME $US_DIR/$i.sql >>$US_DIR/$i.out 2>&1 ;
> echo Completed `date` >>$US_DIR/$i.out ) &
>
> # check to see if desired number of processes are running in parallel.
> # wait for a process to complete if all are running.
> CURR=`ps -lef | grep $US_DIR | grep -v grep | wc -l`
>
> while [ $CURR -eq $COUNT ]
> do
> sleep 5
> CURR=`ps -lef | grep $US_DIR | grep -v grep | wc -l`
> done
>
> done # loop - tables
> wait
>
> dbaccess $DBNAME <<EOS
> update statistics for procedure ;
> EOS
>
> echo "Update Statistics run completed for $DBNAME"
>
> exit
> }
>
> # ---------------------------------------------------------------------
> # MAIN - Start of the program execution
>
> if [ $# -lt 1 -o $# -gt 3 ]
> then
> echo "Usage : $0 <dbname> <run_options> <processes>"
> echo
> echo "dbname - database name, required"
> echo "run_options - "
> echo " BOTH - Generates SQL and RUN, the default"
> echo " SQL - Generates SQL files only"
> echo " RUN - Runs SQL files only"
> echo "              if SQL file does not exist for a table, generates it"
> echo "processes - No of parallel threads/processes "
> echo " - applicable for RUN and BOTH, defaults to 1 "
> echo
> echo "NOTE :-"
> echo
> echo "Incorrect values for <run_options> and <processes> will use defaults"
> echo
> exit
> fi
>
> DBNAME=$1
>
> export US_DIR=/dataconv/$DBNAME.stats
> export SMALL_TAB=1000
> export LARGE_TAB=1000000
>
> if [ -f $US_DIR ]
> then
> echo "Error : $US_DIR is a file, not a directory. STOP"
> exit
> fi
> if [ ! -d $US_DIR ]
> then
> mkdir $US_DIR
> if [ $? -gt 0 ]
> then
> echo "Error : Unable to create/use $US_DIR. STOP"
> exit
> fi
> fi
> cd $US_DIR
>
> if [ $# -eq 2 ]
> then
> RUN_OPT=$2
> COUNT=1
> elif [ $# -eq 3 ]
> then
> RUN_OPT=$2
> COUNT=$3
> else
> RUN_OPT="BOTH"
> COUNT=1
> fi
>
> COUNT=`expr $COUNT + 0 2>/dev/null`
>
> if [ $? -gt 0 ]
> then
> COUNT=1
> fi
>
> if [ "$RUN_OPT" = "SQL" ]
> then
> gen_sql $DBNAME
> elif [ "$RUN_OPT" = "RUN" ]
> then
> run_sql $DBNAME $COUNT
> else
> gen_sql $DBNAME
> run_sql $DBNAME $COUNT
> fi
>
> exit
>
>
> Robert Phillips
> Systems Analyst
> Chamberlin Edmonds and Associates (www.ce-a.com)
> 404-634-5196 x1261
>
>


IDS Forum is maintained by Administrator with WebBBS 5.12.