Netcool/Impact and ServiceNow!

December 6th, 2013

Have you ever tried to integrate Netcool/Impact and ServiceNow! ?

ServiceNow!… An interesting piece of software, I must admit, and it lives in a public cloud, on the Internet. You can build a lot of customizations and your own projects and applications like CMDB, incident management and problem management; marketing people from SN! could probably tell you about more benefits.

I've recently had a need to integrate my Impact 6.1.1 with SN! via SOAP and just wanted to share a few general tips.

1. SN! WSDLs don't seem to be generated correctly for Axis2 and must be adjusted manually before you try compiling them in Netcool/Impact. I basically open the WSDL, visit every complexType entry and remove the name attribute and its value, leaving <complexType> alone. Only then will your WSDL compile in nci_compilewsdl. Thank you, Yasser from the Impact dev team, for pointing me to that one! 🙂
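As a sketch of that manual step, a sed one-liner can do the same edit. It is shown here against a tiny inline sample rather than a real ServiceNow WSDL, and the file names are examples only; run the same substitution over your downloaded WSDL (and keep a backup first).

```shell
# Minimal sketch: strip the name="..." attribute from every complexType
# element before feeding the WSDL to nci_compilewsdl. sample.wsdl stands in
# for your real downloaded ServiceNow WSDL.
cat > sample.wsdl <<'EOF'
<xsd:complexType name="getRecords"><xsd:sequence/></xsd:complexType>
EOF
sed -i 's/<xsd:complexType name="[^"]*">/<xsd:complexType>/g' sample.wsdl
cat sample.wsdl   # <xsd:complexType><xsd:sequence/></xsd:complexType>
```

Depending on how the WSDL was generated, the namespace prefix may differ from `xsd:`; adjust the pattern accordingly.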

2. The SN! CA-signed certificate must be imported into your ImpactProfile WAS. It's signed by Entrust Inc. And here's the full instruction for Impact:
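A rough sketch of what that import boils down to (the alias, certificate file name, truststore path and password below are all placeholders; check your own ImpactProfile truststore location and password before running anything like this):

```shell
# Hedged sketch only: add the Entrust CA certificate that signs the
# ServiceNow endpoint to the Impact profile's WAS truststore, then restart
# the Impact GUI server. Every path and password here is an example.
keytool -importcert -noprompt \
  -alias servicenow_entrust_ca \
  -file entrust_ca.cer \
  -keystore /path/to/ImpactProfile/config/cells/YourCell/trust.p12 \
  -storetype PKCS12 \
  -storepass YourTruststorePassword
```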

3. SN! uses basic HTTP authentication and you probably won't be allowed to switch it off, so compiling the WSDL is only possible locally, against a downloaded WSDL file (both from the CLI and the GUI).

4. The policy generated by the wizard is good and works well; selecting a single parameter usually helps it work better from the beginning (so the WSParams variable is generated correctly).

The SN! wiki documentation is pretty good, however not everything is documented, so beware of your ServiceNow! version.

Everything else depends on your needs, whether this is an incident management integration (opening tickets) or CMDB (importing service trees into TBSM).

That would be it. I’ll share more after I finish my integration with TBSM.


Tickets please!

July 2nd, 2013

You must have come across ticket system integration with BSMs. No, this is not a blog entry about the municipal transportation services here in Kraków (not too bad, I must say; we still miss a metro / subway and it's as good as the traffic on our roads allows, so… 😉).

BSM can open tickets, independently of critical event escalation or of too large a number of critical events being escalated. It can be based on, say, a bad service component status built from a mixture of badly looking KPIs, alarms, or all of these together (even on too many open tickets, ahem). But how do you design it smartly?

I've been working on a few integrations recently and there are a few good answers to this question. The first one, obvious, almost boring and very evangelistic, is: TALK TO YOUR CUSTOMER. How often have you heard that advice, btw? Man, probably too many times. But it's true. Well, not just because it's so smart, but also because when it comes to lawyers, it's always better to have a defense line written into your SoW. Kidding 😉 Well, kind of, at least.

So the Talk To Your Customer rule still applies. The thing is, we need to know the exact scenarios in which tickets would open. And update. And close. Opening tickets for known issues based on an event catalog seems obvious: alarm fields meet the ticketing criteria, and BSM sends a signal to open a ticket in the Trouble Ticket system (aka incident management system, aka Service Desk, aka Service Request system; man, it looks like a lot of synonyms were created just to prove one name-giver is more precise or smarter or more knowledgeable than the others. I can almost hear: "ohohoh, good Lord, it's so obvious they're not the same thing, you ignorant!"). Back on track. So the event manager instructs the TT system to open a ticket, then enriches the alarm that originated the ticket with its ticket ID and acknowledges the alarm, so operators can see that the alarm has been taken care of automatically, based on the existing knowledge base and event catalog, and that an automated, creative (let's say) action has been taken – meaning escalation to a ticket – and the field support guys are already on their way to the failing device to fix it, replace it or report some unexpected issue like a lost cat walking on the chassis. After the work is done they report the ticket solved, the incident manager approves it and the alarm can clear automatically.

That's all true, as long as you don't have:

  • a single point of failure – so we assume there's redundancy or high availability for the device's function, and everybody will survive without that device's availability for a while
  • a service level agreement in place – I just thought this point up; let's assume you signed a service level for your device as a resource with the device vendor or the service provider who installed that device for you and is still responsible for it by power of the SLA. It helps with the fixing-the-device part; the vendor's services start their troubleshooting job from there.
  • anything to report to anyone – so you don't need a maintenance window to run ad hoc as quickly as you can in order to save your results in historical reports on the availability of everything that was supported by the failing device. Simpler: you don't have to take cover by announcing a maintenance window, so you basically don't have to manage maintenance windows at all. Unlikely.
  • business service management in place – so you basically have no idea what depends on that failing device and you go happy-go-lucky about it.

BUT. If any of the situations above is not true for you, then the design of the BSM and ticket system integration will look different.

You need to:

  • define precisely the conditions to open, update and close tickets – frequency, conditions, user roles, interfaces etc.
  • define the data sources for tickets – tickets may open not just based on alarms, but also on combinations of KPIs with alarms etc.
  • understand whether any tickets can be neutralized by planned maintenance
  • check whether there is any logical elimination of automatic tickets for systems which depend on the original ticketing root cause (tell the root cause from the symptoms)
  • understand your knowledge base well enough to decide which tickets open automatically vs. manually – some ticket openings can never be smartly automated
  • use BSM to address tickets smartly; make it depend on the service owner's strategy or on a mixed output based on availability and KPIs
  • determine ticket priorities based on signed SLAs
  • determine all dependencies of any applications or systems on the failing device, and explore the option of opening tickets automatically for the dependants too

There's probably more smart advice to share, but that's what came to my mind quickly tonight. I welcome any comments or suggestions on this topic. And wish you all a good night.

System Automation Application Manager installation adventures

May 16th, 2013


This time I'm trying to install SA AM (System Automation Application Manager) version 3.2.1 FP3 (yes, I know, 3.2.2 is available; I didn't know it by the time I started and I don't want to give up now) and integrate it with TBSM. It's not an easy product to install: I had to download SA AM itself, DB2 (of course) and WAS 6.1 with fix packs. You can get stuck in a few places. The first one I hit was an error during the installation of WAS 6.1 FP1, I guess; it failed and wouldn't recover, because my WAS process was still up. Even after putting it down I couldn't recover my install, and then I found somewhere that it helps to change (hold on, this is a Windows platform install) the Windows service for WAS from automatic start-up to manual and restart the whole Windows box, so that when Windows starts again, the IBM WAS service stays down. It worked after that! I could continue my fix pack install.

And I installed it. But it wouldn't be so great if it worked from the beginning. I couldn't see my SA operations console and came to the conclusion that I should have installed the newest SA AM 3.2.1 fix pack too. I found it: FP3 was there on Fix Central. OK, I started the WAS installation first, and it went fine (I had to stop the WAS service manually again, because now, with SA AM 3.2.1 GA deployed to WAS, I also have an SA AM Windows service which depends on the WAS service). Then SA AM FP3, and I hit another issue: the WAS profile was not augmented with the ISC… What the heck? This is the kind of issue I dislike most – why aren't installers smart enough to take over and do ALL their job automatically for me? I have to read the manuals to fix it (and it's not easy sometimes). To share my solution, in case someone else has no idea how to solve it:

C:\Program Files (x86)\IBM\WebSphere\AppServer\profiles\AppSrv01\bin>manageprofiles.bat -augment -templatePath "C:\Program Files (x86)\ibm\WebSphere\AppServer\profileTemplates\iscae71" -profileName AppSrv01

INSTCONFSUCCESS: Profile augmentation succeeded.

The important thing is selecting the right template, since we have to augment the profile (which is typically a standalone profile on the WAS server for SA AM) with ISC Advanced Edition.

To make it funnier, I found augmenting doable only via the CLI script; if I try to run the Profile Management Tool, it wants me to create a new profile instead of going straight to the existing default AppSrv01 profile. Weird.

OK, I installed the fix pack and started SA AM. Now the Operations Console doesn't start, but shows this message:
An error has occurred while connecting to the end-to-end automation manager component running on the management server.
Possible causes:
1) The management server is down.
2) The automation J2EE framework (Enterprise application EEZEAR) is not started.
3) There are inconsistencies regarding the level of the operations console and the end-to-end automation manager.
4) You are not authorized to access the automation J2EE framework.

Weird. Indeed, the EEZEAR enterprise application is down and I cannot start it with wasadmin. When I try though, it says:

EEZEAR failed to start. Check the logs for server server1 on node saam321winNode01 for details.

An error occurred while starting EEZEAR. Check the logs for server server1 on node saam321winNode01 for more information.

I can see errors like "Access Denied" in SystemOut.log. It's weird; EEZIMEAR looks fine.

I found this: SA AM will not start if Java 2 security is enabled. It gets enabled automatically in the ISC when you enable global security. I didn't notice it, or didn't pay much attention to it, while enabling security before the SA AM installation (a prerequisite). Well, I switched it off and restarted WAS (you have to) and the application started fine!
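As an untested sketch of the same fix done from the command line instead of the ISC (the WAS install path and connection details are examples, and this assumes wsadmin access): `enforceJava2Security` is the attribute on the cell's Security configuration object.

```shell
# Hypothetical sketch only: disable Java 2 security via wsadmin scripting.
# Adjust the wsadmin path, and restart WAS afterwards as noted above.
cat > java2sec_off.py <<'EOF'
sec = AdminConfig.list('Security')
AdminConfig.modify(sec, [['enforceJava2Security', 'false']])
AdminConfig.save()
EOF
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -f java2sec_off.py
```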

And then my eezadmin user could finally access the legendary SA Operations Console! It exists, it’s true!

Wow, so much hassle.



Netcool OMNIbus WebGUI 7.3.1 FP6 plays Java

May 14th, 2013

I've been trying to install OMNIbus WebGUI 7.3.1 Fix Pack 6, and the fix pack does not provide the installation script with the relevant path to the Java executables.

Obviously I didn't read the readme first, so I got to experience, on my own skin, the various problems that causes:

1. The IBM Java available within TIP cannot be used. It starts from within the IAGLOBAL_TIP_HOME directory, which in my case is the standard /opt/IBM/tivoli/tipv2, and as such runs as a Java process which, according to the installer, could be the TIP process itself. The installer doesn't check whether it's TIPProfile or anything else; it just stops, saying: I found another Java process in TIP HOME, please stop all processes in TIP HOME!

2. The ITM Java couldn't be used either. What I have here is the TBSM Agent plus the ITM 6.2.3 for Linux agent, which runs Java 1.5. It's not supported.

3. I couldn't use the JRE of my Java plugin in WebGUI or the TBSM Dashboard server. It's not supported either.

The only Java that works is hidden deep in the installation directory structure, and the readme will tell you its name:


Nice. Why isn't it passed in the installation script? Hypothetical question.

New Tivoli Integrated Portal

May 13th, 2013

For some time now a new version of Tivoli Integrated Portal, cutely referred to as TIP, has been available and numbered (reminder: the so-called TIP 3.x is in fact not TIP, it is DASH, meaning Dashboard Application Services Hub, and even though it's built on the same components, it should be considered a different thing).

What does it all mean to TBSM and TCR and WebGUI administrators?

In order to install the top code level available at the moment (today is the 13th of May 2013), you need to download and install:

1. TBSM – I assume it's the code level most users are on – which installs with Netcool WebGUI and TIP and Impact GUI Server 6.1.0.

2. TCR – which runs TIP 2.1 itself but can install over the TBSM code with TIP with no issues.

3. TIP fix pack – I suggest installing it before trying later TIP fix packs. FITSuit is a prerequisite for this one.

4. TBSM 6.1 Fix Pack 1 – this will install the Dashboard Server component upgrade, which does require minimum TIP

5. Netcool OMNIbus WebGUI 7.3.1 FP6 – it is a prerequisite for TIP

6. TIP – with the relevant FITSuit – on top of everything

If you start from a TCR 2.1 installation and want to upgrade to TIP, you need to start with the TIP 2.2.0 refresh pack (and FITSuit) and then continue straight to TIP after all (assuming you have no WebGUI).

Here's a short list of compatible ("certified") components the new TIP fix pack can install on:

Tivoli Integrated Portal FixPack Installation
PASS: Installed TCRStandalone is certified with tip2.2.0.11.
PASS: Installed TBSM is certified with tip2.2.0.11.
FAIL: Installed OMNIbusWebGUI is not certified with tip2.2.0.11. Certified versions:,,




TBSM 6.1.1 database installation gotchas – real disk space requirements

May 1st, 2013

Have you ever tried to install TBSM 6.1.x on disk space that is not a single partition? It's easy: you usually go for it on your demo VMware image, but it can be a production case too. After all the installations it usually looks like this (in this case we have a logical volume we can extend at any time by adding more disk space):


Picture 1. Regular disk partition layout on a Unix-like system, easy and boring.


Piece of cake. Let's try something harder. Typically, AIX systems' storage is extensively divided across numerous disks in a farm or Power System storage. Nevertheless, a similar scenario can easily be emulated on your VMware box, assuming you create a number of independent disks during your virtual OS installation, like in the example below:

Picture 2. Creating non-single-disk-based disk space in virtual Linux.


The goal is to achieve the following setup. I have the home directory on one disk (sda5), the opt directory (historically for "optional" packages; it's the default directory for most Tivoli packages on the Linux platform) on another disk (sda2), the tmp directory on sda6 and the root directory on sda1. See this:


Picture 3. 6 successfully installed virtual disks on Linux

Now let's follow the available disk space degradation as we install the next components of TBSM. We start with the DB2 9.7 installation.

Picture 4. Disk space after installation of DB2 database manager code.


It looks quite obvious: the DB2 default installation directory sits in /opt, which mounts the sda2 disk. You can check the exact space taken by the DB2 database manager by running this command:

du -sm /opt/ibm/db2/V9.7


If you compare that to the disk space consumed by the DB2 instance itself, you'll see consumption in the /home directory. This is because I installed the default DB2 instance on the Linux platform, called db2inst1, which runs as the db2inst1 user and uses this user's home directory to store the data files. Here's how a fresh DB2 instance impacts the available disk space on my Linux VM:


Picture 5. Disk space after configuring one default DB2 database instance.


So the disk space taken by the DB2 instance files (and don't forget you have to create a fenced user for running stored procedures and at least one administrative user for managing all instances, hence db2fenc1 and dasusr1) is about 80 MB.

I'm sorry for dragging you all this way just to tell you this: it's not going to work with your default TBSM installation. Why? There's too little space for the DB2 instance data files. If you continue with the TBSM database installation, you'll see these sizing options:

Picture 6. TBSM database installation size options.


According to the documentation for TBSM 6.1, available here, you need to secure the following disk space:

  • Small – 3G of disk space
  • Medium – 6G of disk space
  • Large – 10G of disk space

Well, those are maximum limits, someone may say. If you plan on running just a simple demo, it's not going to break anything. This is true; however, did you assume you'd need to monitor disk space consumption especially because of that? If yes, good for you, skip this part. If not, consider the following. Let me show you what's going to happen and why you don't want to use the default settings for data paths and log paths during the TBSM installation.

Let's assume we simply continue the installation with the defaults. The database configuration, especially the transaction log file size for the TBSM database, won't accept your disk space offer. This is how your installation is going to finish if you continue like presented above – it will simply fail:

Picture 7. Example of failure message during TBSM database installation if disk space is too low.



The same goes for the TBSM Metric Marker tables and demo tables. If you go to the /opt/IBM/tivoli/tbsmdb/logs/db2_stdout.log log file, you'll read:


DB21034E  The command was processed as an SQL statement because it was not a

valid Command Line Processor command.  During SQL processing it returned:

SQL0968C  The file system is full.  SQLSTATE=57011


Off the record: after that entry in the log you can see that the installer tries to continue executing the DDL script without validating the available disk space after the first failure, which starts a whole series of unfortunate events and prevents your installation from succeeding in the end.

So it looks bad. Let's look at the disk space now:

Picture 8. Disk space after failing installation.


Well, it doesn't look that bad now after the failure; we still have some space available. It means that creating the tablespace must have used up all the space first, then failed, then rolled back, and so released some of the taken disk space after all. But what happened? Well, after the TBSM database migrated from Postgres to DB2 in version 6.1.0 (the previous TBSM 4.2.1 – there is no TBSM 5.x – was using Postgres 8.x), it became exposed to all the challenges and gains coming from that fact. In order to understand it step by step, we need to get closer to a few DB2 design assumptions. Here's a couple of settings which should draw your attention:

Picture 9. Default buffer pool settings for TBSM database.


So, this is the buffer pool setting first. Rows of each table are cached in memory in objects called pages (a number of rows each) every time DB2 has to read that data from external memory, meaning disk; data written from the buffer pool to a table space passes through it too. You can calculate that the maximum amount of data in the 16K buffer pool will be 48 MB, and 32 MB in the 32K buffer pool. This is not something dangerous to your installation yet; it tells you about the potential RAM consumption in the future, when you launch TBSM into production. But the screen also tells you about the table spaces being created for the TBSM database – let's take a closer look at them.

Note: if you actively use the TBSM Metric Marker and Metric History databases, they have their own separate settings and disk and memory consumption rates.

From this command:


you know that the TBSM database uses automatic storage management, which means the database manager creates new data containers whenever needed. It is not a System Managed Space (SMS) table space or a Database Managed Space (DMS) one. It means the maximum capacity the data can reach is defined by the storage available at the path chosen while creating the TBSM database. In the screenshot below you can see that the TBSM database path is <default>.

Picture 10. Sample configuration of TBSM database, it shows default database path.


What is <default>? Well, go to your DB2 command line and, as the DB2 instance owner, run the following command:

db2 get dbm cfg | grep DFTDBPATH

By default it is instance user’s home directory, in my case: /home/db2inst1.

It means your database will grow until it hits the storage limit, i.e. until no free disk space remains. That's good to remember if you didn't realize it or didn't have a clear answer. It means you don't want to select <default> during your TBSM database installation; you'd rather check the space on other disks and allocate the TBSM data files there.
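If you decide not to keep <default>, the path can also be changed up front. A sketch, assuming a placeholder mount with enough space (run as the instance owner; the instance must be restarted for the change to take effect):

```shell
# Hypothetical example: move the default database path off /home/db2inst1
# before creating the TBSM database. "/data/db2" is a placeholder mount.
db2 update dbm cfg using DFTDBPATH /data/db2
db2stop
db2start
```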

Next, let's see the transaction logs, like below. These steps let you define how many logs, and how big, can be created for your TBSM database. Again, for the TBSM Metric Marker and history databases it's a totally separate story. By default you define 10 primary logs and 2 secondary logs, each 16000 4K pages big, which means up to 12*16000 = 192000 4K pages, i.e. 768 MB of data max. Note: on the Unix and Windows platforms the default log file size, both primary and secondary, is 1000 4K pages, with a range of 4 – 1 048 572 (always in 4K pages). This space gets allocated as soon as your database activates, which means you need to have it available on your hard disk the moment you start your database manager. Again, the logs, similarly to the data files, use the default log path by default – see the screenshot below. By default it all goes to the /home/db2inst1 directory again:


Picture 11. Transaction log for TBSM database configuration snippet.

What to do then? Again, if you lack space in your /home mount, select another value for the log path name. Be knowledgeable – know what it all means for your installation. Take the installation hardware requirements seriously and monitor the usage.
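To double-check the default allocation quoted above, and for reference, the hypothetical DB2 commands that relocate or shrink the logs (commented out, since they need a live instance; the log path is a placeholder):

```shell
# Recompute the default TBSM log allocation: 10 primary + 2 secondary logs,
# each 16000 pages of 4 KB.
LOGPRIMARY=10; LOGSECOND=2; LOGFILSIZ=16000; PAGE_KB=4
echo "$(( (LOGPRIMARY + LOGSECOND) * LOGFILSIZ * PAGE_KB )) KB preallocated on activation"
# Sketch only -- run as the instance owner against your TBSM database:
# db2 update db cfg for TBSM using NEWLOGPATH /data/tbsmlogs
# db2 update db cfg for TBSM using LOGFILSIZ 8000
```

That echo prints 768000 KB, the ~768 MB figure from the paragraph above.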

Last but not least, the TBSM DB directory, which is used to store the TBSM DB DDL files, the executables to recreate the TBSM DB, jars etc., takes its piece of the cake too: it's 160 MB as declared by the installer, and you can see it at the Summary step:

Picture 12. Summary view



Don't forget about the temporary disk space the installer takes and returns – it must be available for the installation time. The installer is flexible and will look for 200 MB in /tmp or in the home directory of the user who runs it, in our case the db2inst1 home directory (I have only 170 MB available in /tmp):

Picture 13. /tmp disk space is too low



I'll need more than the 1 GB of disk space that I assigned to the /home directory to install the TBSM database successfully, and the Install Guide is not specific about that.

This is because, by default, my data files and transaction logs go to the /home/db2inst1 directory, and because I didn't simply create one single disk space for all directories – which can be a real case in production environments too. Additionally, all the installer's temporary files will be copied there for the duration of the TBSM DB installation.

So what is the real hard disk space requirement for TBSM DB installation?


a) at least 1x768 MB for transaction logs (10 primary, 2 secondary, 16000 4K pages each, all in one TBSM database – meaning you don't create a separate TBSMHIST db)

b) 80 MB for fresh db2 instance installation

c) 200 MB for temporary files in case you don’t have space in /tmp

d) suggested 3 GB for up to 5000 service instances

Total: 4048 MB minimum in /home



a) at least 160 MB for tbsmdb installation in /opt/IBM/tivoli/tbsmdb

b) 871 MB for DB2 database manager in /opt/ibm/db2/V9.7

Total: 1031 MB minimum in /opt
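Re-adding the figures above quickly (numbers are taken from the two lists; the "small" DB sizing is counted as 3000 MB):

```shell
# Sanity-check the per-filesystem minimums and the grand total.
HOME_MB=$(( 768 + 80 + 200 + 3000 ))   # logs + instance + temp + small sizing
OPT_MB=$(( 160 + 871 ))                # tbsmdb files + DB2 code
echo "/home: ${HOME_MB} MB, /opt: ${OPT_MB} MB, total: $(( HOME_MB + OPT_MB )) MB"
```

The total of 5079 MB is where the roughly 5.1 GB single-partition figure below comes from.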


Keep in mind this is still before installing TBSM dataserver, dashboard server, JazzSM, XMLtoolkit and ITMAgent for TBSM or embedded Netcool OMNIbus with EIF probe.

So, if you're lucky and have one single / root partition for all your files on your Linux or Unix box, prepare a minimum of 5.1 GB of disk space in total for the TBSM database preparation. If you get tempted to create the TBSMHIST database, that will be another 768 MB for separate transaction logs. You can lower the number of logs or decrease the single log file size to accommodate it, though. You'll be 200 MB ahead if you secure enough space in the /tmp directory (I usually make it 500-1000 MB to be safe).

That’s all for now, thanks and see you next time.



KPI disappearing? No way!

April 27th, 2013

Look at this article:

Fair enough, Prelert is trying to make money, but I spotted this:

“KPI’s probably won’t disappear any time soon, but predictive analytics tools for IT operations, such as Prelert’s, will increasingly become indispensable for predicting and tracing the roots of performance issues.”

A KPI can be anything; I just invented a new one: time to find a needle in a haystack – let's call it, for short, Time To Needle.

Hence I think KPIs will stay with us forever. There will always be performance of something to measure and express.


What is the important thing in GUI?

April 27th, 2013

I just read this article:

After reading the whole article, I started wondering what the thing I like best in Windows XP is.

It’s GUI.

There's something in common between Windows XP, Samsung TouchWiz and the LG Smart TV GUI (its biggest sense can be seen in the Home Dashboard): they are EASY and SIMPLE and CLEAR – the English dictionary suggests I say legible. This, plus the fact that those GUIs are colorful in a kind of elegant way (to me; some people may think they're kitsch), makes me stay, use them and enjoy.

I'll never forget those long hours spent on freshly installed Windows XP systems while working with HTML, JavaScript and PHP pages back in 2002/2003. Windows XP seemed so modern, user-friendly and so cool, and so was the work with its tools and apps. On top of that, the music playing in WinAmp added charm to those long evenings and nights at my computers. In fact, there's a kind of sentiment in my generation and older ones too – Windows XP was the first in the Windows series that I didn't have to reinstall at least annually.

And now, well, XP is heading for sunset in 12 months. A pity. The sentiment is there. And why MS gave up such a greatly simple GUI is something I cannot understand.

XMLtoolkit stop issues

March 28th, 2013

If this has ever happened to you – the XMLtoolkit doesn't want to stop normally or gives you other issues related to creating a connection to itself – it may well be a registry error.

Here’s the symptom:


[netcool@tbsm61 bin]$ ./

GTMCL5478W: The request could not be delivered, the toolkit may be down. If the toolkit is busy processing data, allow it to complete and shutdown gracefully. If the toolkit is idle but will not stop, reissue the request with the -f flag.

The exception was: Exception creating connection to:; nested exception is: No route to host

retCode: 4


So I try to stop my XMLtoolkit instance, the script fails and the toolkit itself keeps running.


This is mainly because of the XMLtoolkit failover capability. Each instance registers itself by the IP of the machine it runs on, usually the IP configured on the first network interface. You can check on this at any time:


[netcool@tbsm61 bin]$ ./ -U db2inst1 -P smartway -v

GTMCL5457I: Toolkit registry table information.

ID: 1


Primary: true

Action: 0

LastUpdate: Thu Jan 01 01:00:00 CET 1970

ID: 2

Name: null

Primary: false

Action: 0

LastUpdate: Thu Jan 01 01:00:00 CET 1970

GTMCL5358I: Processing completed successfully.

retCode: 0


The corresponding value is written under the DL_Toolkit_Instance_ID property.


This may happen especially on a virtual machine that gets reconfigured as it is moved to new networks. There's a quick remedy for this: update the DL_Toolkit_Instance_ID property with a new value, like a static IP or a unique hostname, in the file, and register that value in the database:


[netcool@tbsm61 bin]$ ./ -U db2inst1 -P smartway -s 1

GTMCL5458I: Setting the toolkit registry table

GTMCL5358I: Processing completed successfully.

retCode: 0

[netcool@tbsm61 bin]$ ./ -U db2inst1 -P smartway -v

GTMCL5457I: Toolkit registry table information.

ID: 1


Primary: true

Action: 0

LastUpdate: Thu Jan 01 01:00:00 CET 1970

ID: 2

Name: null

Primary: false

Action: 0

LastUpdate: Thu Jan 01 01:00:00 CET 1970

GTMCL5358I: Processing completed successfully.

retCode: 0


The second run of the script with the -v flag will help you verify that the value was set OK.
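The properties-file half of the remedy can be sketched like this (the file name below is just a placeholder, since the actual properties file sits in your XMLtoolkit installation; the hostname is an example too):

```shell
# Sketch: pin the toolkit registration to a stable hostname instead of a
# floating IP. "toolkit.properties" is a placeholder file name.
PROPS=toolkit.properties
printf 'DL_Toolkit_Instance_ID=9.156.44.10\n' > "$PROPS"   # sample old value
sed -i 's/^DL_Toolkit_Instance_ID=.*/DL_Toolkit_Instance_ID=tbsm61.example.com/' "$PROPS"
cat "$PROPS"   # DL_Toolkit_Instance_ID=tbsm61.example.com
```

After editing the real file, register the new value in the database with the script shown above.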


Now it’s time to stop the toolkit without issues:

[netcool@tbsm61 bin]$ ./

GTMCL5443I: The script toolkit_stop.xml has been submitted for processing.

retCode: 0

And this is it.



Business Service Composer – pain in the… artifact?

March 22nd, 2013

I've started actively using Business Service Composer for creating my service trees in TBSM. It's a really powerful tool that helps you do a lot of things with the service structure, regardless of the components' CDM classes (or any classes in any namespace, regardless of their origin) – however, with one big glitch: it's a huge step back from the single administration console approach, by deselecting TIP as the portal of choice in Tivoli and using a Java GUI instead. There are two inconveniences:

– If you really want to enjoy the BSC GUI, don't use it over an ssh tunnel from within your PuTTY session to the TBSM server (MS Windows users). Losing focus on buttons or just-selected menu items is so frequent that it confuses more often than it works OK. It's a nightmare, let's put it straight. Just run it on your local workstation instead; it will save you a lot of nerves and time.

– But once you run it on your desktop, you'll have to get over the pain of updating project files via SCP (Windows users), there and back again, in order to upload any single change to your static resources or policies.

It would probably be better to run it against a remote site mounted locally, where the projects directory already sits in the right place for the xmltoolkit script to pick it up and load it into artifacts, without all that copy nightmare.

That's it. Powerful, conditionally usable, still some way from nice tooling. And it has nothing to do with the TBSM Service Editor. Confusing? We have application descriptors and templates to model the dependency of CIs on applications first, then we have some functions of the TADDM GUI, then we have BSC, and at the end we have the TBSM Service Editor. Confusing? It all works and each has its special use, but I'd have a piece of better advice for development. And I promise to share it.