IDS Forum

Re: Using High Perf Loader to unload entire database?

Posted By: MIKE MAGIE
Date: Thursday, 21 August 2008, at 10:16 a.m.

In Response To: Using High Perf Loader to unload entire database? (Jonathan Smaby)

Jonathan,

If you can get away with it, I suggest using the onpladm utility rather than the GUI; onpladm is much easier to work with. Below are some HPL functions from a Korn shell script I wrote. It is not a load-and-go script - you'll need to add some details, and there are some error-checking routines and informational messages that won't apply to you - but the nuts and bolts are here, and if you are decent with ksh it should be pretty easy to adapt. These functions create an HPL load job for a table. In my environment a load_database script builds the list of tables I want to load and passes each table name to a load_table script; the functions below are from load_table. The load_database script also manages process load, since a single HPL load can fork multiple processes and on a busy server that could cause a load issue. I try to keep the number of concurrent HPL loads under 9, but you may be able to get away with much more; we are somewhat hardware challenged. If you have any questions, email me.
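To give you an idea of the driver side, here is a rough, untested sketch of the sort of thing load_database does around the per-table loads. The table-list query, the load_table.sh name, and MAXJOBS are placeholders for whatever fits your environment; the real point is backgrounding the per-table loads and running them in batches of at most 9 or so:

#!/bin/ksh
# Rough sketch only. Build the list of tables to load, then run the per-table
# load script in the background, in batches of at most MAXJOBS at a time.
MAXJOBS=9
JOBCOUNT=0

# Placeholder query - build your table list however you like.
# tabid > 99 skips the system catalogs.
dbaccess ${DBNAME}@${SERVER} - <<EOT!
output to '${WORKDIR}/tables.lst' without headings
select tabname from systables where tabid > 99 and tabtype = "T"
EOT!

for TABLE in `cat ${WORKDIR}/tables.lst`
do
    load_table.sh ${DBNAME} ${TABLE} &   # placeholder name for the per-table script
    (( JOBCOUNT = JOBCOUNT + 1 ))

    if [[ ${JOBCOUNT} -ge ${MAXJOBS} ]]
    then
        wait                             # let this batch finish before starting more
        JOBCOUNT=0
    fi
done
wait                                     # wait for the final batch

This is a coarse throttle (it waits for a whole batch rather than keeping exactly 9 loads running), but something along these lines is enough to keep the forked HPL processes from swamping a busy box.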

####################################################################################
# Initial HPL function. This first checks to see if we are loading from another
# database server, then checks to see if the onpload db exists. If the db does not
# exist we call setup_hpl to set up all job details; when the job details are
# created the onpload db is automatically created. If the onpload db does exist we
# check to see if there is a load job defined for this table. If there is not, we
# do a second check to see if there are old objects left in the onpload db; if
# orphans are found they are deleted. After we determine that no objects exist we
# call setup_hpl to create a load job. If a job is found with all objects defined
# we call run_hpl to start the job.
####################################################################################

do_hpl_load()
{
    ################################################
    # Account for loads from other servers.
    ################################################
    if [[ ${BKUPSERVER} == ${SERVER} ]]
    then
        JOB_NAME=${DBNAME}:${TABLE}_load
        DEV_NAME=${DBNAME}:${TABLE}.load
    else
        JOB_NAME=${DBNAME}:${TABLE}@${BKUPSERVER}_load
        DEV_NAME=${DBNAME}:${TABLE}@${BKUPSERVER}.load
    fi

    echo "Checking to see if onpload database exists..."

    dbaccess sysmaster@${SERVER} - <<EOT!
output to '${WORKDIR}/dbcheck.out' without headings
select count(*) from sysdatabases
where name = "onpload"
EOT!
    RET=$?

    if [[ ${RET} -gt 0 ]]
    then
        systrack.sh $0 ERROR Cannot connect to sysmaster@${SERVER} to setup HPL job
        echo "Unable to connect to sysmaster@${SERVER}" >> ${WORKDIR}/$$_load_${TABLE}.txt
        send_mail
    fi

    DBCOUNT=`cat ${WORKDIR}/dbcheck.out`

    if [[ ${DBCOUNT} -ne 1 ]]
    then
        echo "onpload database does not exist - no jobs defined."
        echo "Calling setup_hpl to create load job, onpload db will be automatically created."
        setup_hpl
    else
        echo "onpload database exists."
        echo "Checking to see if ${DBNAME}:${TABLE} has an hpl job defined."

        onpladm describe job ${JOB_NAME} -fl > /dev/null 2>&1
        RET=$?

        if [[ ${RET} -ne 0 ]]
        then
            echo "No job defined for ${DBNAME}:${TABLE}."
            echo "Checking for orphaned hpl devices..."

            onpladm describe device ${DEV_NAME} > /dev/null 2>&1
            RET=$?

            if [[ ${RET} -eq 0 ]]
            then
                echo "Device found - deleting device..."
                cleanup_hpl
            else
                echo "No devices found. Proceeding to setup hpl job..."
                setup_hpl
            fi
        else
            echo "HPL job located - beginning load process..."
            run_hpl
        fi
    fi
}

##################################################################
# Function to create the HPL load job. We first cat strings to a
# .spec file needed initially for HPL to create the load job.
# This file will be removed the next time this table is loaded.
# After the .spec file is created we use onpladm to create the
# necessary details for the table's load job. Finally we create
# the actual job with the onpladm utility. Once a job is created
# for a given table this function will not be called again. Once
# the details and the job have been created we call run_hpl to
# start the load process.
##################################################################

setup_hpl()
{
    ###################################################################
    # Check to see if we need to load from another database's DATADIR.
    ###################################################################
    if [[ ${BKUPSERVER} == ${SERVER} ]]
    then
        echo "Local database load"

        DEV_FILE=${WORKDIR}/${TABLE}.spec.load
        touch $DEV_FILE

        echo "BEGIN OBJECT DEVICEARRAY ${DEV_NAME} " > ${DEV_FILE}
        echo "BEGIN SEQUENCE" >> ${DEV_FILE}
        echo "TYPE PIPE" >> ${DEV_FILE}
        echo "FILE" >> ${DEV_FILE}
        echo "TAPEBLOCKSIZE 0" >> ${DEV_FILE}
        echo "TAPEDEVICESIZE 0" >> ${DEV_FILE}
        echo 'PIPECOMMAND "gunzip < '${OUTDIR}/${TABLE}'.unl.gz"' >> ${DEV_FILE}
        echo "END SEQUENCE" >> ${DEV_FILE}
        echo "END OBJECT" >> ${DEV_FILE}
    else
        echo "Loading from other database - calling vcs_config to set env properly."

        DEV_FILE=${WORKDIR}/$TABLE".spec.remote.load"
        touch $DEV_FILE

        . ${INEXEC}/vcs_config.sh ${BKUPSERVER}

        if [ ${ERROR_FLAG} = "Y" ]
        then
            echo "Unable to set environment to ${BKUPSERVER}" > ${WORKDIR}/$$_load_${TABLE}.txt
            send_mail
        fi

        INDIR=${DATA_ENV_DIR}/informix/${BKUPDBNAME}
        INFILE=${INDIR}/${TABLE}.unl.gz

        echo "BEGIN OBJECT DEVICEARRAY ${DEV_NAME} " > ${DEV_FILE}
        echo "BEGIN SEQUENCE" >> ${DEV_FILE}
        echo "TYPE PIPE" >> ${DEV_FILE}
        echo "FILE" >> ${DEV_FILE}
        echo "TAPEBLOCKSIZE 0" >> ${DEV_FILE}
        echo "TAPEDEVICESIZE 0" >> ${DEV_FILE}
        echo 'PIPECOMMAND "gunzip < '${INFILE}'"' >> ${DEV_FILE}
        echo "END SEQUENCE" >> ${DEV_FILE}
        echo "END OBJECT" >> ${DEV_FILE}

        . ${INEXEC}/vcs_config.sh ${SERVER}

        if [ ${ERROR_FLAG} = "Y" ]
        then
            echo "Unable to set environment ${SERVER}" > ${WORKDIR}/$$_load_${TABLE}.txt
            send_mail
        fi
    fi

    onpladm create object -F ${DEV_FILE}
    RET=$?

    if [[ ${RET} -ne 0 ]]
    then
        echo "Problem creating device array. Will attempt to clean up job and try again."
        cleanup_hpl
    fi

    onpladm create job ${JOB_NAME} -d ${DEV_NAME} -D ${DBNAME} -t ${TABLE} -fla
    RET=$?
    echo "Return code was ${RET}"

    if [[ ${RET} -ne 0 ]]
    then
        echo "Problem creating job. Will attempt to clean up job and try again."
        cleanup_hpl
    else
        echo "HPL job created successfully - running job"
        run_hpl
    fi

    ################################################
    # Remove unneeded spec file - job is created.
    ################################################
    rm ${DEV_FILE} > /dev/null 2>&1
}
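Just so you can see what setup_hpl is building, the .spec file for a local load ends up looking like the following (database, table, and path here are made up):

BEGIN OBJECT DEVICEARRAY mydb:customer.load
BEGIN SEQUENCE
TYPE PIPE
FILE
TAPEBLOCKSIZE 0
TAPEDEVICESIZE 0
PIPECOMMAND "gunzip < /work/unloads/customer.unl.gz"
END SEQUENCE
END OBJECT

onpladm create object -F reads that file and registers the device array, and the job created afterwards just points at that device array by name.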
#############################################################
# This function is the last HPL function called; it uses the
# onpladm utility to start the load job. After the job runs,
# it also removes any .spec file for this table that may be
# left over from the initial creation of the job. The .spec
# file is no longer needed and is easily recreated if the
# onpload db is dropped for some reason.
#############################################################

run_hpl()
{
    echo "Begin loading the data at" `log_date`
    rm ${LOGFILE}

    ######################################################
    # The LOGFILE is the same one used for a dbload, in
    # order to avoid some confusion.
    ######################################################
    onpladm run job ${JOB_NAME} -fl > ${LOGFILE}
    RET=$?
    echo "Return code from run job is ${RET}"

    if [[ ${RET} -ne 0 ]]
    then
        echo "Problem running job. Will attempt to clean up job and try again."
        cleanup_hpl
    fi

    echo "HPL load job completed successfully."

    ################################################
    # Remove unneeded spec file if it still exists.
    ################################################
    DEV_FILE=${WORKDIR}/$TABLE".spec.load"
    rm $DEV_FILE > /dev/null 2>&1
}

########################################################################################
# We call cleanup_hpl in the event that one of our earlier onpladm commands fails -
# for example, if an orphan device array exists when the job has been deleted, or if
# any other error occurs. We try to delete all objects associated with a job,
# including the job itself. When this function executes there will be some errors
# that can occur; for example, if we delete a job successfully along with all of its
# subcomponents, the subsequent delete device command will fail, or if a device
# exists without an associated job, the delete job command will fail. These errors
# are expected and not bad. We will only attempt to perform cleanup duties once:
# after the first trip through we set a variable ERROR_ON to 1, and subsequent
# attempts to call cleanup will fail if this variable is set, exiting with an error.
########################################################################################

cleanup_hpl()
{
    if [[ ${ERROR_ON} -eq 1 ]]
    then
        systrack.sh $0 ERROR Cannot cleanup HPL onpload database
        echo "Cannot cleanup HPL onpload database issues." >> ${WORKDIR}/$$_load_${TABLE}.txt
        send_mail
    fi

    echo "Preparing to cleanup onpload database. Some errors will occur."

    onpladm delete job ${JOB_NAME} -fl -R
    onpladm delete device ${DEV_NAME}
    onpladm delete map ${JOB_NAME} -fl
    onpladm delete format ${JOB_NAME}

    rm $DEV_FILE > /dev/null 2>&1
    ERROR_ON=1

    do_hpl_load
}
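For what it is worth, the idea with the load_table wrapper is just to set up the environment these functions expect and then call do_hpl_load. A bare-bones version might look like this - the paths are made up, and send_mail, log_date, systrack.sh, and vcs_config.sh are local helpers of ours that you would replace with your own:

#!/bin/ksh
# Sketch of the per-table wrapper (load_table). All paths are placeholders.
DBNAME=$1
TABLE=$2

SERVER=${INFORMIXSERVER}             # instance we are loading into
BKUPSERVER=${3:-${SERVER}}           # source server, if loading from another box
WORKDIR=/work/hpl                    # scratch area for dbcheck.out, .spec files, logs
OUTDIR=/work/unloads/${DBNAME}       # where the <table>.unl.gz files live
LOGFILE=${WORKDIR}/${TABLE}.load.log # reused dbload-style log file
ERROR_ON=0
# (INEXEC, DATA_ENV_DIR, and BKUPDBNAME are also needed for loads from another server.)

# ... the do_hpl_load / setup_hpl / run_hpl / cleanup_hpl functions above go here,
# or get sourced from a common library ...

do_hpl_load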
