IDS Forum

RE: Using High Perf Loader to unload entire database?

Posted By: Tim Ertl
Date: Wednesday, 20 August 2008, at 5:09 p.m.

In Response To: Using High Perf Loader to unload entire database? (Jonathan Smaby)

HPL is great! I have used it for single tables before, but recently I needed
to do a migration to new hardware, and with help from this list I was able to
do a lot of tables at the same time. On my old hardware there was a point
where too many tables slowed things down, but you CAN do more than one at a
time by using scripts to drive onpload rather than the ipload GUI. The IBM
docs are weak on the subject, but someone sent me a sample script that takes
a list of tables and builds the jobs, which I can then run in any order or in
any quantity.

Attached are an email about the subject and an email with a custom script
someone sent me that does a really nice job of automating the building of the
jobs so they can run in parallel.

Tim Ertl
413-442-9000 x6211

-----Original Message-----
From: ids-bounces@iiug.org [mailto:ids-bounces@iiug.org] On Behalf Of Jonathan Smaby
Sent: Wednesday, August 20, 2008 1:49 PM
To: ids@iiug.org
Subject: Using High Perf Loader to unload entire database? [13164]

Good morning,

Well, now that we have begun to set up tables using advanced data types,
specifically LVARCHAR and CLOB, we can no longer use the Informix native
utilities ONUNLOAD and ONLOAD, because both of those utilities only support
legacy data types. So I have been fuddling around with the Informix High
Performance Loader, with onpload and ipload (GUI), and I noticed that I can
only do one table at a time :o( I'm totally bummed. I recall from one of the
IIUG conference sessions earlier this year that it was possible to unload an
entire DB to file. My question is: has anyone done that with HPL? The reason
I am asking is that we have a test and training environment for my
programmers, with Informix databases that I refresh using onload, and now I
would like to use HPL so we can support the advanced data types.

Thank you very much in advance for any insight, wisdom, or experience with
this utility.

Jonathan Smaby
Pomona College
---
My favorite quote about the handling of the Iraq war: If you try to fail,
and succeed, which have you done? ~George Carlin (1937-2008)

Tim,

Glad to hear it worked for you. We are still running 9.40 and 7.31, so the
multiple buffer sizes have never come into play for us. By the way, there was
a series of three articles on HPL in an IBM on-line magazine; you might find
them interesting. Here are the links:

http://www.ibmdatabasemag.com/blog/main/archives/2008/05/the_informix_hi.html

http://www.ibmdatabasemag.com/blog/main/archives/2008/06/the_informix_hi_1.html

http://www.ibmdatabasemag.com/blog/main/archives/2008/07/the_informix_hi_2.html

Rob Schmitz
Embarq Data Management
913-534-3474
rob.b.schmitz@embarq.com
www.embarq.com

-----Original Message-----
From: ids-bounces@iiug.org [mailto:ids-bounces@iiug.org] On Behalf Of Tim Ertl
Sent: Sunday, July 27, 2008 3:41 PM
To: ids@iiug.org
Subject: RE: ONPLADM , IPLOAD & ONPLOAD scripts [12915]

Rob, I spent the day today using your utility to perform our conversion in 5
hours TOTAL, without removing any data beforehand. My previous attempt was
estimated to take over 32 hours. This was GREAT! Many, many thanks for your
help!

Tim Ertl
413-442-9000 x6211

-----Original Message-----
From: ids-bounces@iiug.org [mailto:ids-bounces@iiug.org] On Behalf Of Schmitz, Rob B [EQ]
Sent: Friday, July 25, 2008 12:20 PM
To: ids@iiug.org
Subject: RE: ONPLADM , IPLOAD & ONPLOAD scripts [12904]

Tim,

I have such a script. It is not intended to cover all situations, just those
our group has run into; it serves our needs. You might be able to use it as
is, or you might start with it and modify it going forward. The script is
named hpl_create.ksh and it uses a text file named hpl_create.list. The
hpl_create.list file has one line per table. Each line contains the name of
the table, the number of devices (unload files), the letter e or d indicating
express or deluxe mode, and the root name of the job and device(s). The job
name will be the root name with "_job" tacked on the end, and the device name
will be the root name with "_dvc" tacked on the end.

For example,

mytab 12 e mytab

will create both a load job and an unload job named mytab_job, with 12 unload
files defined by the device array mytab_dvc. The jobs will use express mode.
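A small hpl_create.list might therefore look like this (the table names,
device counts, and modes below are made up purely for illustration):

customer    8  e  customer
orders     12  e  orders
doc_blobs   4  d  doc_blobs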

There are restrictions on using express mode (row size, extended data types,
etc.), so the script performs a few rudimentary checks and may change an
express specification to deluxe if needed.

The script expects you to pass the instance name, the database name, and the
path for the location of your unload files (the device path); see the example
invocation after the script below. With all of our scripts, we source a
script that sets our environment for a particular instance. Also, what we
call an instance is essentially the extension of the onconfig file name, and
we use it for many things to identify an instance. It is different from
(usually shorter than) the INFORMIXSERVER value.

The first time a job is created for a table with this script, you will see
several errors related to the "delete job" lines in the script. As I say,
this is a somewhat crude script that meets our needs, and it has never been
worth our time to make it squeaky clean.

This script was not written to meet anyone's needs but our own; however, you
may find some useful bits in it. (Caveat lector.)

Here is the text of the script:

-----------------------------------------------------------------------------------------

#!/usr/bin/ksh

#-------------------------------------------------------------------------------
# hpl_create.ksh
#
# Notes on job creation options:
#   -flu means create both an unload job and a load job
#   adding an "a" means treat the data source as a device array
#   adding a "c" means deluxe mode
#   adding an "N" means deluxe mode without replication
#   adding an "n" means no-conversion (this is fastest, must be express)
#   Without the "c" option, the mode will default to express
#-------------------------------------------------------------------------------

if [ "$#" -lt 3 ]
then
    echo "USAGE: `basename $0` instance database device_path"
    exit 1
else
    export INST=$1
    export DB=$2
    export DEVICE_PATH=$3
fi

# Source the environment settings for this instance
. ~informix/infx_env $INST

#-------------------------------------------------------------------------------
# Create the jobs
#-------------------------------------------------------------------------------

cat hpl_create.list | tr '{A-Z}' '{a-z}' \
| while read TAB NUM_DEVICES MODE JOBNAME_ROOT
do
    # Look up the table's row size; an empty result means the table does not exist
    echo " unload to hpl_create.tabinfo delimiter ' '
    select rowsize
    from systables
    where tabname = '${TAB}'" \
    | dbaccess $DB > /dev/null 2>&1

    if [ `cat hpl_create.tabinfo | wc -l` -eq 0 ]
    then
        echo "Table $TAB does not exist in $DB"
    else
        export ROWSIZE=`cat hpl_create.tabinfo`
        export PAGESIZE=`onstat -b | grep "buffer size" | awk '{print ($(NF-2))}'`

        # Express mode cannot handle rows larger than a page; fall back to deluxe
        if [ "$ROWSIZE" -ge "$PAGESIZE" -a "$MODE" = "e" ]
        then
            echo "Changing the mode to Deluxe for $TAB because row size exceeds page size."
            export MODE="d"
        fi

        export JOBNAME=${JOBNAME_ROOT}_job
        export DVCNAME=${JOBNAME_ROOT}_dvc

        # Remove any existing device and jobs (these produce harmless errors on the first run)
        onpladm delete device $DVCNAME
        onpladm delete job $JOBNAME -fl -R
        onpladm delete job $JOBNAME -fu -R

        # Build the device-array spec file, one FILE entry per unload file
        echo "BEGIN OBJECT DEVICEARRAY ${DVCNAME}" > ${TAB}_device_specfile
        while [ "$NUM_DEVICES" -gt 0 ]
        do
            echo "BEGIN SEQUENCE" >> ${TAB}_device_specfile
            echo "TYPE FILE" >> ${TAB}_device_specfile
            echo "FILE $DEVICE_PATH/${TAB}.unl${NUM_DEVICES}" >> ${TAB}_device_specfile
            echo "TAPEBLOCKSIZE 0" >> ${TAB}_device_specfile
            echo "TAPEDEVICESIZE 0" >> ${TAB}_device_specfile
            echo "PIPECOMMAND" >> ${TAB}_device_specfile
            echo "END SEQUENCE" >> ${TAB}_device_specfile
            let NUM_DEVICES=NUM_DEVICES-1
        done
        echo "END OBJECT" >> ${TAB}_device_specfile

        onpladm create object -F ${TAB}_device_specfile

        # Create the load and unload jobs: deluxe without replication,
        # or express with no conversion
        if [ "$MODE" = "d" ]
        then
            onpladm create job $JOBNAME -d ${DVCNAME} -fluaN -D $DB -t $TAB
        else
            onpladm create job $JOBNAME -d ${DVCNAME} -flua -n -D $DB -t $TAB
        fi
    fi
done

#-------------------------------------------------------------------------------
# Clean up the work files
#-------------------------------------------------------------------------------

rm -f hpl_create.tabinfo
rm -r *_device_specfile

-----------------------------------------------------------------------------------------
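As an example of how the script might be invoked (the instance name, database
name, and unload path below are made up; substitute your own, with
hpl_create.list in the current directory):

./hpl_create.ksh prod1 stores_db /work/unload

Once the jobs exist, each unload or load can be kicked off with onpload,
using the same -fu/-fl job flags that appear above, along the lines of:

onpload -j mytab_job -fu     # unload mytab into /work/unload/mytab.unl1 ... unl12
onpload -j mytab_job -fl     # reload mytab from the same device array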

Rob Schmitz
Embarq Data Management
rob.b.schmitz@embarq.com
www.embarq.com

-----Original Message-----
From: ids-bounces@iiug.org [mailto:ids-bounces@iiug.org] On Behalf Of Tim Ertl
Sent: Thursday, July 24, 2008 4:39 PM
To: ids@iiug.org
Subject: ONPLADM , IPLOAD & ONPLOAD scripts [12892]

Does anyone already have scripts to make up the files to input to onpladm
that would create the backup jobs/devices/etc. for onpladm?

I wish to back up all of my 126 tables with onpload using a single command. I
can make up the jobs with the GUI and then run them at the time of
conversion, but there is a fair amount of typing.

Thanks for your help. Onpload really goes FAST! Amazingly fast!

Tim Ertl
413-442-9000 x6211
