Creating an Automation Script End Point in IBM Maximo

The IBM Maximo Integration Framework offers a wide variety of capabilities for publishing or consuming information to or from external systems. Some of the available methods for communication are:

  • Flat File Exchange
  • XML File Exchange with XSLT Mapping
  • Database Table Exchange (both internal and external to Maximo)
  • HTTP or SOAP Service Consumption

However, there may be times when the available end point handlers do not fit exactly what you need. What options do you have to customize the way the data is exchanged (method, format, content, etc.)?

1. Java customizations: This is old and tired. However, it is possible to write your own Router Handler class to deliver an outbound message to its destination. It is also possible to deliver the message using one of the available handlers above, and then write a custom IBM Maximo cron task to move the message to its destination. For me, that is too many moving parts.

2. Third-party solutions: Implementing a third-party middleware solution, such as Node Red or MuleSoft, can be an option for removing specific system differences when exchanging information between two systems. This is especially useful in large enterprises where many systems are exchanging data. This can allow for a single source of business logic to exchange information across a wide range of systems. For smaller integrations however, this can just add another layer of management and skills necessary to support the organization.

3. Automation Scripts: This is an easy, simple way to build your own end point handler in IBM Maximo without introducing unnecessary Java customization. Luckily, IBM provides a hook that allows us to do just that.

To create an Automation Script End Point in IBM Maximo, you will first need to register the Script Router Handler class. Please note that this class was introduced in the IBM Maximo 7.6.0.8 release. In the 7.6.1.1 release, the SCRIPT handler was added by IBM, which means that these steps can be skipped if you are on 7.6.1.1 or later. If you are running a version between 7.6.0.8 and 7.6.1.1 you can follow these steps to create the Script Router Handler in IBM Maximo:

  1. Log into IBM Maximo as an administrator
  2. Navigate to the Integration > End Points application
  3. Under the More Actions menu, choose the Add/Modify Handlers option
  4. Click the New Row button
    a. Handler: SCRIPT
    b. Consumed by: INTEGRATION
    c. Handler Class Name: com.ibm.tivoli.maximo.script.ScriptRouterHandler
  5. Click the OK button


Once you have the SCRIPT handler, you can use it to register a new End Point in IBM Maximo:

  1. Log into IBM Maximo as an administrator
  2. Navigate to the Integration > End Points application
  3. Click the New End Point button
    a. End Point: TEST-SCRIPT
    b. Description: TEST SCRIPT HANDLER END POINT
    c. Handler: SCRIPT
  4. After populating the Handler with SCRIPT, a SCRIPT property will appear
    a. Value: <The name of your Automation Script>, e.g. TEST-SCRIPT
  5. Click Save


At this point, you have an End Point that calls an Automation Script that has not yet been created. The next step is to define that Automation Script and implement your own logic:

  1. Log into IBM Maximo as an administrator
  2. Navigate to the System Configuration > Platform Configuration > Automation Scripts Application
  3. From the More Actions menu, choose the Create > Script option
  4. In the ensuing dialog, enter the basic script information:
    a. Script: TEST-SCRIPT
    b. Description: TEST AUTOMATION SCRIPT FOR END POINT
    c. Language: python
  5. Enter the source code from below and press the Create button


Source Code:
# ------------------
# This script will write a message to the file system 
# and FTP the file to another server for processing.
# 
# Implicit Variables:
#   INTERFACE - the name of the triggered Publish Channel
#   requestData - the message payload
# 
# Alex Walter
# alex@a3jgroup.com
# 21 JAN 2021
# ------------------
from java.io import File
from java.io import FileWriter
from org.apache.commons.io import FileUtils
from org.apache.commons.net.ftp import FTPClient
from org.apache.commons.net.ftp import FTPReply
from psdi.util.logging import MXLoggerFactory

logger = MXLoggerFactory.getLogger('maximo.script.a3jtutorial')
if logger.isDebugEnabled():
    logger.debug('Starting TEST-SCRIPT script')
    logger.debug('INTERFACE: ' + str(INTERFACE))

# If the file exists, then delete it and create it new
ftpFileName = r"C:\Temp\ftpfile.xml"  # raw string so the backslashes are not treated as escape sequences
ftpFile = File(ftpFileName)
if ftpFile.exists():
	ftpFile.delete()
ftpFile.createNewFile()

# Write the file to disk
fileWriter = None
try:
	fileWriter = FileWriter(ftpFile)
	fileWriter.write(requestData)
finally:
	if fileWriter:
		fileWriter.close()

# Set up FTP variables - it is usually a good idea to store these as System Properties
ftpHostName = "ftp.company.com"
ftpUserName = "username"
ftpPassword = "password"

ftpClient = None
fileInput = None
try:
	# Make an FTP connection
	ftpClient = FTPClient()
	ftpClient.connect(ftpHostName)
	reply = ftpClient.getReplyCode()
	if logger.isDebugEnabled():
		logger.debug('ftp reply: ' + str(reply))
	
	if not FTPReply.isPositiveCompletion(reply):
		if logger.isDebugEnabled():
			logger.debug('not a positive ftp reply')
		ftpClient.disconnect()
	else:
		if logger.isDebugEnabled():
			logger.debug('positive ftp reply!')
		# Log into the FTP server
		if ftpClient.login(ftpUserName, ftpPassword):
			if logger.isDebugEnabled():
				logger.debug('Logged into FTP site')
			# ftpClient.setFileType(2);
			fileInput = FileUtils.openInputStream(ftpFile)
			# Put the file in the default directory (remote name only, not the local path)
			ftpClient.storeFile(ftpFile.getName(), fileInput)
			if logger.isDebugEnabled():
				logger.debug('File sent to Server')
		# Log out
		ftpClient.logout()
		if logger.isDebugEnabled():
			logger.debug('Logged out of FTP site')
finally:
	if ftpClient and ftpClient.isConnected():
		ftpClient.disconnect()
	if fileInput:
		fileInput.close()

 

NOTE: If you need help installing this automation script in your IBM Maximo environment, or have questions about other Maximo CMMS configurations, don’t hesitate to reach out! Leave a comment below, or email info@a3jgroup.com

 

Top 6 Things To Check When Your Integration Is Not Working

In the spirit of all those top-whatever-number lists that populate the internet these days, I have compiled a top 6 list of things to check when your integration is not working in Maximo. So, no, this is not an article on how to set up an integration so that everything works great if you follow the steps; this is the uncommon article that addresses what to do when something might have gone wrong.

Number 1: Logs, Logs, Logs, Logs, Logs

The very first thing to check when anything in Maximo is not working is the logs. Specifically, the WebSphere SystemOut.log and SystemErr.log files. These files are a running tally of what is going on in Maximo and a running list of what is erroring. They are the first source for finding out what is actually happening in the system. If you need something more specific, go to Maximo’s Logging application and look up the Integration Root Logger. Set it to DEBUG (remember to go back and set it to a lower level afterward), then get those logs ready and test your integration!

Number 2: Admin Mode is on

I know, everyone knows this one. It’s the old question we have heard any number of times in life: “Did you do/check this?” Just double check it, especially if the integration was worked on recently. You and your colleagues may have been working in the system as well as on the integration, and Admin Mode could have been turned on. As you know, Cron Tasks will not run with Admin Mode on.

Number 3:  Cron Tasks are not running

Many working parts go into the creation of an integration, and Cron Tasks are an integral part of it. They control the processing of queues and of flat file transfers. If things are not processing, one of the first places to check is the JMSQSEQCONSUMER Cron Task (if you are using WebSphere messaging queues) or one of the file consumer Cron Tasks (FLATFILECONSUMER and XMLFILECONSUMER). If you are in a clustered environment, make sure the Cron Task is not stuck on a specific server. Do Not Run settings may be in place, and the server the Cron Task should run on may not be running it. One of the very reasons I wrote this article stemmed from a general discussion about an integration that was not running. Someone asked, “Could the Do Not Run system properties be in the maximo.properties file?” That’s right folks, those can be passed to the system through the properties file, and it is quite possibly the trickiest place to find something stopping an integration from working. Cron Tasks can also get stuck and sometimes stop running at their designated time. Stopping and starting them is the first thing to try, but sometimes you need to check the server itself to see why the Cron Task is not running.

Number 4: WebSphere messaging queues

The WebSphere messaging queues go hand in hand with the Cron Tasks for processing and checking queue traffic.  Many times your Cron Tasks are running, but you see no changes to the data in the system.  The queues may be receiving data but not pushing it out, so messages pile up.  Checking your queues can be done in two places.  First and foremost, the queues can be checked under the intjmsbus in WebSphere (you remember those buses you set up, right?).  If you built out your integration with more than one intjmsbus, you will have to check each one you created to confirm where the messages may be sticking in the queues.  If you created just one intjmsbus, you will only need to check one place.  When troubleshooting a tricky integration that does not want to work, creating more than one intjmsbus can be tremendously helpful in pinpointing where a breakdown may be, as well as limiting the effect of a poorly performing integration; I would highly recommend this strategy in a clustered Maximo environment built to handle integrations.  When you navigate into WebSphere and look at the queues for your bus or buses, if you see one (whether inbound or outbound) with the message count piling up, that is where the “bottleneck” is.

That is not the only place to check the queues.  You can also check the queues used in an integration from within Maximo, under the Integration module.  This leads into the next section, which builds on WebSphere’s queues: these things tend to build from Cron Tasks, to queues, to application message tracking and processing.

Number 5: Message Tracking and Reprocessing

In the Integration module, whether you set up a Publish Channel or an Enterprise Service, under the More Actions menu you will see a choice for Message Tracking.  Once you have opened the dialog box and checked the Enable Message Tracking option, Maximo will retain a record of each message in the database.  This is a very useful option for tracking what is going on with your integration.  In the External Systems application of the Integration module, if you build out an External System with your Publish Channels or Enterprise Services, the More Actions menu gives you the option of choosing Add/Modify Queues.  This will show you which queues have been built out in WebSphere and can show the same data that is buried in the intjmsbus of the WebSphere administration console.  This menu choice allows you to see message data as well as clear queue data, so you can operate your message queues from Maximo much the same way you can in WebSphere.  Message Tracking goes hand in hand with Message Reprocessing, which works with your queues the way you set them up.  If you built the queue to try to process a message 5 times, it will try to process it that many times before dropping it, and you can see the messages awaiting reprocessing in the Message Reprocessing application.

Number 6: Endpoints

Endpoints and their handlers are what actually deliver messages to the destination side of an integration.  One of the things that trips people up is managing the different Endpoints between environments: DEV, TEST, and PROD usually bind to different systems, so when creating your integrations, each environment should point to its own destination server as the Endpoint.  Getting your Endpoint set up can be a challenge, but pointing one environment at another environment’s destination will just result in questions of why the integration is not working.

 

Each one of these topics can get much more in depth and I encourage everyone to dive right into that depth, but the next time you’ve got an integration not working think about these points and see if that may be affecting your integration.

Updating End Points and Reloading Cache in Automation Script

In Maximo you can use End Points to point to External APIs to pull in data. You can invoke those End Points in a variety of ways, including inside an Automation Script. Take the following code as an example:

importPackage(Packages.psdi.iface.router);
var handler = Router.getHandler('ENDPOINTNAME');
var responseBytes = handler.invoke(null, null);

You can then parse the response that comes back and use that data in any way that meets your needs. Recently we were working with an external API that required an initial login to capture an API Key to be used with future calls. We called the Login API and then grabbed an API Key from the response object. We then needed to update a different End Point to utilize this API Key as an HTTP Header in future calls.

This appeared to work in Maximo as we would see the data show up on the updated End Point. However, each time we invoked the End Point it would throw an error that it was not able to connect. We would then restart Maximo and everything would start working. What we found is that End Point data in Maximo is cached. After updating an End Point, we needed to reload the End Point cache to make a successful connection. To do that we added a line in the automation script after we updated and saved the End Point with the new API Key. The line that is needed is:

MXServer.getMXServer().reloadMaximoCache(EndPointCache.getInstance().getName(), true);

After running this code we can now successfully call our updated End Point. Be aware that there are several similar objects that are cached when Maximo starts up, such as Relationships, Integration information, some Domain information, etc. If updates are made to those records, you may need to refresh the Maximo cache for those records as well.

Hope this helps!

Maximo JSON API: An IoT Example

I’ve had a lot of conversations recently with folks attempting to implement more Condition Monitoring within their organization. The benefits of shifting from time-based maintenance to condition- or use-based maintenance are well documented and very real. However, making that shift involves a fair amount of planning, analysis, and technology. This article will show an example of how to bridge a small portion of the technology gap; specifically we will focus on creating meter readings in Maximo via the JSON API available in Maximo 7.6.0.2 and higher.

Meter readings are the heart of how Condition Monitoring is implemented in Maximo. They represent a piece of data at a point in time associated with an asset. This data could be related to the asset’s condition such as temperature, pressure, voltage, etc. This data could also be related to the asset’s usage such as an odometer reading, the number of cycles of the asset, etc. Lastly, the data could be based on a simple value list such as Pass/Fail, Open/Closed, Blowing / Not Blowing, etc.

To create a new meter reading via Maximo’s JSON API, let’s start with an example:

POST http://mxapprove.a3jgroup.com/maximo/oslc/os/mxasset/_QkVERk9SRC8xMTQ1MA==?lean=1
maxauth: c21pdGg6c21pdGgx
Content-Type: application/json
properties: *
x-method-override: PATCH
patchtype: MERGE
{
    "assetmeter": [
        {
            "metername": "O-PRESSUR",
            "linearassetmeterid": 0,
            "newreading": "4900"
        }
    ]
}

Let’s look at this part-by-part. First, we start with the HTTP POST itself.

POST /maximo/oslc/os/mxasset/_QkVERk9SRC8xMTQ1MA==?lean=1

This message needs to be an HTTP POST (not HTTP GET). The http://mxapprove.a3jgroup.com/maximo portion of the URL is your Maximo environment’s URL; substitute your own environment’s URL. The /oslc part of the path represents usage of the Maximo JSON API. The /os part of the path tells the API that the next part of the path (in our example, /mxasset) will be an Object Structure in Maximo. Finally, the /_QkVERk9SRC8xMTQ1MA== represents a unique identifier for an Asset record that can be referenced in the MXASSET Object Structure. This identifier can be found by querying for the asset using an HTTP GET against the same MXASSET Object Structure, or it can be derived by base64 encoding the asset’s SITEID + “/” + ASSETNUM and prefixing that string with an underscore (“_”) character. The ?lean=1 part of the path allows us to not have to specify namespace prefixes in the body of the request.
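The identifier derivation described above can be sketched in a few lines of Python; the site and asset number here are the demo-data values implied by the example URL:

```python
import base64

# Example asset from the Maximo demo data: SITEID "BEDFORD", ASSETNUM "11450"
site_id = "BEDFORD"
asset_num = "11450"

# Base64-encode SITEID + "/" + ASSETNUM, then prefix an underscore
# to build the unique identifier used in the object structure URL.
encoded = base64.b64encode((site_id + "/" + asset_num).encode("utf-8")).decode("ascii")
identifier = "_" + encoded

print(identifier)  # _QkVERk9SRC8xMTQ1MA==
```

Running this reproduces the identifier seen in the example URL, which is a handy sanity check when building URLs programmatically.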

Next are the various headers.

maxauth: c21pdGg6c21pdGgx
Content-Type: application/json
properties: *
x-method-override: PATCH
patchtype: MERGE

The maxauth header represents the credentials that are being used to broker the transaction. The value is a base64 encoded string of the format USERNAME + “:” + PASSWORD. In an LDAP environment, switch the maxauth header to Authorization and prefix the string “Basic ” to the base64 encoded value. The Content-Type header tells the HTTP request that the body of the message will be in JSON format. The properties header tells the request which fields from the MXASSET Object Structure should be sent back in the HTTP response, with the * character representing all fields in the object structure. The x-method-override header with a value of PATCH tells Maximo that this will be an update to the asset record, and the patchtype header with a value of MERGE tells Maximo to add a new record while keeping the other ASSETMETER records. Without that header, the integration will replace all of the ASSETMETER records with the list in the message.
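The two credential styles can be sketched as small helper functions; the function names are my own, and the smith/smith1 credentials are the demo values behind the example header:

```python
import base64


def native_auth_headers(username, password):
    """Headers for native Maximo authentication (maxauth header)."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8")).decode("ascii")
    return {"maxauth": token, "Content-Type": "application/json"}


def ldap_auth_headers(username, password):
    """Headers for an LDAP-secured environment (standard Basic auth)."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token, "Content-Type": "application/json"}


# The demo credentials smith/smith1 produce the header value from the example.
print(native_auth_headers("smith", "smith1")["maxauth"])  # c21pdGg6c21pdGgx
```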

Next is the body of the message.

{
    "assetmeter": [
        {
            "metername": "O-PRESSUR",
            "linearassetmeterid": 0,
            "newreading": "4900"
        }
    ]
}

Note that we don’t need any identifying information about the asset in the message body, such as the ASSETNUM or SITEID. This is because we are required to specify the unique identifier in the URL string. We specify the assetmeter key as an array of meters associated with the asset. For this example, we are only updating a single meter against the asset (Outlet Pressure). In our object, we specify the Meter Name and New Reading value. The linearassetmeterid key with a value of 0 is necessary due to that field being part of the unique identifier on the ASSETMETER table. Values such as the New Reading Date and Inspector will default based on the current date and the logged in user credentials, but can also be specified explicitly in the message.
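Putting the pieces together, the full request can be assembled with Python’s standard library. This is a sketch using the illustrative URL and credentials from the example; the line that would actually submit it is left commented out:

```python
import json
import urllib.request

# URL and credentials are the illustrative values from the example above.
url = "http://mxapprove.a3jgroup.com/maximo/oslc/os/mxasset/_QkVERk9SRC8xMTQ1MA==?lean=1"

payload = {
    "assetmeter": [
        {"metername": "O-PRESSUR", "linearassetmeterid": 0, "newreading": "4900"}
    ]
}

headers = {
    "maxauth": "c21pdGg6c21pdGgx",
    "Content-Type": "application/json",
    "properties": "*",
    "x-method-override": "PATCH",
    "patchtype": "MERGE",
}

# Build the POST request; urllib.request.urlopen(req) would send it,
# and the response would echo the asset fields per the properties header.
req = urllib.request.Request(
    url, data=json.dumps(payload).encode("utf-8"), headers=headers, method="POST"
)
# response = urllib.request.urlopen(req)
```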

Please feel free to leave questions or comments below. Good luck in your IoT journey!

Call Maximo Automation Scripts from JSON API

It was not until I read chapter 14 of IBM’s overview of the Maximo JSON API that it occurred to me that we could use the JSON API to launch an automation script in Maximo and then see the results in the return message. We can create our own APIs using the relative simplicity of an automation script and not have to write a single line of Java code. Truly powerful stuff!

The example provided in the article was also useful to me because I came across it while loading data into Maximo. The script used in the article queries Maximo objects and reports back a record count; in my situation that was a timely revelation.

Creating the Script

The first step in this process is to create an automation script that we want Maximo to run. I took the script provided in the example and expanded the list of objects to suit my situation. Besides the additional objects I wanted a total count of records that were loaded which is the purpose of the “Get Total Count” section at the end of the script.

Source Code:

importPackage(Packages.psdi.server);
// Create the response object
var resp = {};
// Get the Site ID from the Query Parameters
var site = request.getQueryParam("site");
// Count of Work Orders
var woset = MXServer.getMXServer().getMboSet("WORKORDER", request.getUserInfo());
woset.setQbe("SITEID","="+site);
var woCount = woset.count();
resp.wo_count = woCount;
woset.close();
// Count of Service Requests
var srset = MXServer.getMXServer().getMboSet("SR", request.getUserInfo());
srset.setQbe("siteid","="+site);
var srCount = srset.count();
resp.sr_count = srCount;
srset.close();
// Count of Items
var itmset = MXServer.getMXServer().getMboSet("ITEM", request.getUserInfo());
var itmCount = itmset.count();
resp.item_count = itmCount;
itmset.close();
// Get Total Count
resp.total = woCount+srCount+itmCount;
var responseBody = JSON.stringify(resp);

Navigate to the Automation Script application and select Create and then choose Script.


Since we are going to be launching this script from an HTTP call we just need to create the script and not provide a separate launch point.

Fill out the script Name, Description, Language and the Source Code from above.

Make sure the script is Active and select Create to save the script.

Executing the Script

We need three pieces of information to complete a successful JSON API call:

  • Site ID
    • In this example we are using the demonstration data, so our site is BEDFORD.
  • Username and Password
    • User Name: maxadmin
    • Password: maxadmin
  • URL
    • http://maximo_host/maximo/oslc/script/a3j_recordcount?_lid=maxadmin&_lpwd=maxadmin&site=BEDFORD
    • Note that there are underscores in front of lid and lpwd, which might not be obvious from the link formatting above.

Script Results

{
"wo_count": 1331,
"sr_count": 41,
"item_count": 357,
"total": 1729
}
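On the consuming side, the JSON returned by the script is easy to verify. A small Python sketch using the sample response above:

```python
import json

# Sample response returned by the record-count script above.
response_text = """
{
"wo_count": 1331,
"sr_count": 41,
"item_count": 357,
"total": 1729
}
"""

counts = json.loads(response_text)

# The script computes "total" as the sum of the individual counts.
assert counts["total"] == counts["wo_count"] + counts["sr_count"] + counts["item_count"]
print(counts["total"])  # 1729
```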

Disable Anonymous Integration Access to Maximo

In older versions of Maximo, the product would ship with the ability to process inbound messages through an anonymous super user account when the system was configured to use native Maximo authentication. The user MXINTADM, which comes configured with MAXADMIN privileges when Maximo is installed, would be used as the default account to broker any inbound traffic through the Integration Framework. This made integrating with other systems via Web Services or standard HTTP very easy; simply supply a properly formatted message to the integration framework and you can do just about anything in the system. Common tasks such as creating Service Requests and updating Purchase Order details were very easy to accomplish.

While this method offered ease, it also came with an inherent security risk. Anyone familiar with the Maximo integration framework and its messaging structure could send inbound transactions to Maximo without having to identify themselves. The transaction would show as having been performed as the MXINTADM user in the system.

Luckily, the folks at IBM identified this risk and took steps to ship the product with this feature disabled by default in recent versions. Here is an IBM support article that identifies the security risk and outlines a mitigation procedure:

http://www-01.ibm.com/support/docview.wss?uid=swg21968191

Now, let’s walk through how to check that your environment has anonymous integration access disabled, and how to disable it if necessary.

Maximo System Property: mxe.int.allowdefaultlogin

In Maximo 7.6.0.3 IBM introduced a new System Property called mxe.int.allowdefaultlogin. This property is a boolean value that controls whether anonymous access is allowed. A property value of 1 will allow anonymous access, while a property value of 0 will not. If you are running Maximo 7.6.0.3 or higher and do not have this property, you can add it through the System Properties application.

Maximo EJB Deployment Descriptor

If your environment does not have the mxe.int.allowdefaultlogin property, or is at a patch level less than 7.6.0.3, then you must modify the Maximo EJB Deployment Descriptor files to disable anonymous access. These files are located in the following directories (substitute the proper Maximo home directory for C:\IBM\SMP\maximo):

C:\IBM\SMP\maximo\applications\maximo\mboejb\ejbmodule\META-INF\ejb-jar.xml

C:\IBM\SMP\maximo\applications\maximo\mboejb\ejbmodule\META-INF\ejb-jar_notf.xml

Each of these files has 4 instances of the ALLOWDFLTLOGIN environment entry. With anonymous access disabled, each descriptor looks like:

<env-entry>
  <env-entry-name>ALLOWDFLTLOGIN</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>0</env-entry-value>
</env-entry>

Change the value of 1 to 0 in each of the 4 locations within each file. Then rebuild the MAXIMO.EAR file and re-deploy the file to WebSphere.

Making these changes will force any systems that are integrating with Maximo to supply proper authentication header credentials. In environments that utilize native Maximo security, this means supplying the MAXAUTH header with a Base64 encoded username:password combination.

If you have any trouble, or have questions on how this works, please leave feedback in the comments below. 

XSLT Data Transformation in Maximo

Are you in need of synchronizing to an external database with different column names and different data types? Do you have a requirement to integrate data from an external database that does not match your attributes?

XSLT may be just what you are looking for.

XSLT stands for Extensible Stylesheet Language Transformations, and it is built on XSL (Extensible Stylesheet Language). If you are new to this, you may ask, “What does that mean?” Think of XSL as a style sheet for XML, much like CSS is a style sheet for HTML. It is a set of rules that allows you to shape your XML. This can be something eye catching, such as displaying your XML with colors or fonts, or a full-fledged rearranging of the way the XML data looked originally. Essentially, XSLT transforms XML from one document into another. It draws on related languages to retrieve the data you are requesting: some parts of your document will rely on XPath to point to the data being retrieved, and some parts could use XQuery to query a database and bring back what you specifically need.

In this document, we’ll look at how XSLT is formed, how it retrieves data, how it re-formats data for transmission from one database to another, and how it is applied while the data is passing through a middleware system (e.g. WebSphere or Weblogic) into an external database from a Maximo database.

So, let’s get started. XML is the ultimate data transporting format. It is the backbone for the vast amount of data transmission that we do, and can be tweaked in many ways to pass data from one system to another. As mentioned above, knowing how to edit/write the additional languages (and apply them) will offer you boundless potential. In our scenario, we have an XML output from a Maximo database that has work order data.

This, unfortunately, is not going to work for our destination database, as it has different column names, attribute names, and data types that only allow so many characters to be in an attribute’s field. The destination database will render its XML output to look something like this:

At this point you may be thinking…”Whoa, how am I going to do anything with this?” WONUM is CUSTOMER. SITEID is LOCATION. PRODUCT is only the third section of my GLACCOUNT attribute, and why would you want two different descriptions with one of them obviously having been shortened? Enter XSLT.

With an XSLT document, we can re-construct the XML from what was originally produced by one database into a new “document” that relates the correct names and data that can be passed into the destination database. Let’s take a closer look at how this is done.

When setting up your document, of course you will need to make your opening declarations, as in the tag <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" …>. The Transform portion of the namespace indicates that this XSL sheet is actually an XSL transformation. You can use transform instead of stylesheet at the beginning of the tag, but where the real work begins is at the <xsl:template match="/"> tag. The attribute match="/" is XPath, and it points the XSLT file to the root of the result set that can be accessed.

You will also see that a match=”PublishMXWO_XSLT” as well as a few other match= elements are in place. Basically, the PublishMXWO_XSLT is our publish channel, which sends data out of a Maximo integration. We’ve got an XSLTSet (which is how Maximo can retrieve data in a dataset), and we’ve got WORKORDER, which is our actual object from the database. This lets the XSLT document know where we are retrieving our data from.

The next key element that is used is an XSL element that allows for the data to be “transformed” from one value to another. The XSLT element <xsl:value-of> is critical to extracting the value of a selected node from the source XML. Further down in your XSLT document, you will code for what you want to transform.

In this example of code, my matched value of SITEID is what we want to “transform” and, via the value-of element, we are telling the document to set it to LOCATION. Additionally, in XSLT, when using the select="." expression, you retrieve the current node. That node is LOCATION.

Further “drilling down” can be done via the <xsl:for-each> element; however, we will only be transforming data in this document. More information on the for-each element is available in the XSLT specification. There are many, many elements that can be used with XSL/XSLT that you may have a use for sometime down the line.

Now that you’ve seen how to transform an attribute, lets look at that tricky case of just needing the third section of the GL account; as that is all that is desired by the destination database.

Here, once our match has provided the dataset we want, we “drill down” further and pull back the third section of the GL account (glorder 2), as seen in the select="GLACCOUNT/GLCOMP[@glorder='2']" expression.

In this code we simply want to transfer the third section of the GL Account. In Maximo, or more correctly the database, the GL Account sections are stored as section 1 (0 in the order), section 2 (1 in the order), and section 3 (2 in the order). We simply need to transfer the third (2) section in this integration. If we asked for the GLACCOUNT, the output would come back looking like this:

This is not the desired data, so the select="GLACCOUNT/GLCOMP[@glorder='2']" statement will pull just the glorder 2 component.

We have one more tricky problem to deal with: our destination database has a description attribute that only allows 30 characters per field and a short description attribute that only allows 10 characters. We will handle this in the match going into the destination database’s attributes.

Here we have the match element pulling description and our value-of element (going to our destination database) has attributes of DESCR and DESCRSHORT:

In our stylesheet, we match the node and limit the length using substring(., 1, 30) for the description and substring(., 1, 10) for the short description.

Once we have that done, we have our XSLT sheet set up and the complete document looks like this:

Now we are done with our document and it needs to be added to your integration. Our integration is outbound, so it is a Publish Channel, but an Enterprise Service can also be set up with an XSL transformation. In the XSL Map field, enter the full path where your .xsl file will reside (e.g. C:\TEMP\mycoolXSLTdoc.xsl).

A note for those using XSLT mapping in a multi-server (clustered) environment: if the XSL file is not (or cannot be) built into the EAR, you will need to place a copy of it on every server the integration can run on, at the directory path specified in the Publish Channel/External System.

Once the XSL Map is in place and the integration is turned on, Maximo will publish messages that get transformed into the external system format.

Go out there and give it a try, and the next time you’re in a meeting with one of your users and they say “how are we going to get the data from here to here”, you may just have the answer for them!

Feel free to leave any comments or questions below. For visual instruction of the previous steps, check out our video tutorial.

Querying Maximo using the REST and JSON APIs

In a recent exchange on the IBM developerWorks forum for Maximo, the question arose of how best to query a Maximo instance whose Integration Framework is configured for LDAP. Getting data from an LDAP-based Maximo is similar to an environment configured for native authentication, with a few subtle differences. This article will demonstrate how to query for data using the REST API, found in Maximo 7.5.0.3 and higher, and the JSON API, found in Maximo 7.6.0.2 and higher.

The information detailed below is available in an IBM article (available as a PDF or as a web page) titled “Maximo NextGen REST API”. Based on questions I have received from our customers and other Maximo professionals, there seems to be some confusion on how to use this information.

There are many tools available that facilitate communication to and from Maximo using the REST and JSON APIs. This article uses the Postman application, which can be downloaded for free from the Postman website.

Before we dive into the “How” we should discuss the “Why”. What are some reasons for wanting to use these Maximo APIs? Here are a few:

  • Creating new records
  • Modifying existing records
  • Querying Maximo data from an external application
  • Performing calculations on Maximo data such as averaging a cost, finding maximum cost or summing a budget from an external application
  • Creating lists based on Maximo data
  • Querying related records from an external application such as retrieving a list of work orders for a given asset
  • Deleting data
  • Replacing data

Querying Maximo Data with Native Authentication

Let’s start off with a simple scenario: show me all Person records that start with the string ‘CAR’. Note that we are using the Maximo demonstration data for these examples.

To start, we need two pieces of information to complete a successful API call:

  1. Maximo API URL
  2. Maximo Username and Password

However, just having the Maximo Username and Password is not good enough to make an API call. We’ll need to encode those credentials before calling the API.

The credentials will need to be Base64 encoded using the username:password format. For example, if your credentials were maxadmin:maxadmin, your Base64 encoded credentials would be bWF4YWRtaW46bWF4YWRtaW4=. Any Base64 utility can perform the encoding: supply username:password as the input and it will return the encoded string.
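The encoding can also be done in a few lines of Python, as an alternative to an online utility:

```python
import base64

def encode_credentials(username: str, password: str) -> str:
    """Base64-encode credentials in the username:password format."""
    raw = f"{username}:{password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

print(encode_credentials("maxadmin", "maxadmin"))  # bWF4YWRtaW46bWF4YWRtaW4=
```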

Now we can fire up our Postman application to perform the calls to Maximo.

JSON API

For the JSON API, our Maximo URL will look like this:

http://maximo_host/maximo/oslc/os/mxperson?oslc.where=personid="CAR%"

Substitute your Maximo host name for the maximo_host, and include a port number if necessary.

Next, we’ll need to supply the credentials as an HTTP Header. The header key for Native Maximo Authentication is maxauth. The header value will be your Base64 encoded credentials. Finally, we’ll use the HTTP Method of GET. It comes together like this:

GET http://maximo_host/maximo/oslc/os/mxperson?oslc.where=personid="CAR%"
maxauth: bWF4YWRtaW46bWF4YWRtaW4=
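The same request can be sketched with Python's standard library (the host name is a placeholder, and the % wildcard is percent-encoded as %25 when written in code):

```python
import urllib.request

# Build the GET request with the maxauth header carrying the
# Base64-encoded credentials.
url = 'http://maximo_host/maximo/oslc/os/mxperson?oslc.where=personid="CAR%25"'
req = urllib.request.Request(url, headers={"maxauth": "bWF4YWRtaW46bWF4YWRtaW4="})

# urllib.request.urlopen(req) would send the request; it is not called
# here because it requires a reachable Maximo instance.
```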

In Postman, enter the URL, select the GET method, and add the maxauth header on the Headers tab.

Maximo will perform a wildcard search, so all person records that start with the string CAR will be returned. In the case of the sample or demonstration data, that is one record.

The result will be something similar to this:

{
  "member": [
    {
      "href": "http://maximo_host/maximo/oslc/os/mxperson/_Q0FSU09O"
    }
  ],
  "responseInfo": {
    "href": "http://maximo_host/maximo/oslc/os/mxperson?lean=1&oslc.where=personid=%22CAR%25%22"
  },
  "href": "http://maximo_host/maximo/oslc/os/mxperson"
}

When using the JSON API you get a URI back for each matching record. This URI is especially useful: you can use it to query, update, or delete that specific record. Here, you can place the URI back into Postman and perform a GET to receive all of the pertinent information for that record.

http://maximo_host/maximo/oslc/os/mxperson/_Q0FSU09O
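Extracting that record URI programmatically is a one-line JSON lookup. A sketch in Python, using an abridged copy of the response shown above:

```python
import json

# Response body from the JSON API query above, abridged to the
# fields we need.
response_body = """
{
  "member": [
    {"href": "http://maximo_host/maximo/oslc/os/mxperson/_Q0FSU09O"}
  ],
  "href": "http://maximo_host/maximo/oslc/os/mxperson"
}
"""

doc = json.loads(response_body)
record_uri = doc["member"][0]["href"]
print(record_uri)  # the URI to GET for the full record
```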

REST API

For the REST API, our Maximo URL will look slightly different; however, the concept of authentication remains the same. Substitute your Maximo host name for the maximo_host, and include a port number if necessary:

http://maximo_host/maxrest/rest/os/mxperson?personid=CAR%

It comes together like this:

GET http://maximo_host/maxrest/rest/os/mxperson?personid=CAR%
maxauth: bWF4YWRtaW46bWF4YWRtaW4=

Set up the same way in Postman, the request returns:

{
  "QueryMXPERSONResponse": {
    "rsStart": 0,
    "rsCount": 1,
    "MXPERSONSet": {
      "PERSON": [
        {
          "rowstamp": "[0 0 0 0 0 106 -81 -98]",
          "ACCEPTINGWFMAIL": true,
          "ADDRESSLINE1": "62 Winthrop Street",
          "BIRTHDATE": "1962-08-22T00:00:00-04:00",
          "CITY": "Medford",
          "COUNTRY": "US",
          "DISPLAYNAME": "Tara Carson",
          "FIRSTNAME": "Tara",
          "HIREDATE": "1997-08-01T00:00:00-04:00",
          "LASTEVALDATE": "2001-08-01T00:00:00-04:00",
          "LASTNAME": "Carson",
          "LOCTOSERVREQ": true,
          "NEXTEVALDATE": "2002-08-01T00:00:00-04:00",
          "PERSONID": "CARSON",
          "PERSONUID": 21,
          "POSTALCODE": "02155",
          "PRIMARYEMAIL": "tara.carson@helwig.com",
          "PRIMARYPHONE": "781-555-6247",
          "STATEPROVINCE": "MA",
          "STATUS": "ACTIVE",
          "STATUSDATE": "2003-09-25T15:44:38-04:00",
          "STATUSIFACE": false,
          "TRANSEMAILELECTION": "NEVER",
          "WFMAILELECTION": "PROCESS",
          "PHONE": [
            {
              "rowstamp": "[0 0 0 0 0 96 57 -106]",
              "ISPRIMARY": true,
              "PHONEID": 145855,
              "PHONENUM": "781-555-6247",
              "TYPE": "WORK"
            }
          ],
          "EMAIL": [
            {
              "rowstamp": "[0 0 0 0 0 93 114 125]",
              "EMAILADDRESS": "tara.carson@helwig.com",
              "EMAILID": 145855,
              "ISPRIMARY": true,
              "TYPE": "WORK"
            }
          ]
        }
      ]
    }
  }
}

To get information about a single record in the REST API, you need to reference the Unique ID of the record. In this example, since we are referencing the PERSON object, the PERSONUID attribute is our Unique ID. To query for the Tara Carson record returned above (PERSONUID 21), include the PERSONUID in the URL:

http://maximo_host/maxrest/rest/os/mxperson/21?_format=json&_compact=1

Querying Maximo Data with LDAP Authentication

If your Maximo instance is configured to use LDAP Authentication, you must use a slightly different HTTP Header to supply the appropriate Maximo credentials. The other information, such as the URL and HTTP method, remains the same.

In place of maxauth, Authorization is used as the key in the HTTP header. The same Base64 encoded username:password combination created earlier is used for the value; however, this time it is preceded by the word Basic and a space character.

It comes together like this using the JSON API:

GET http://maximo_host/maximo/oslc/os/mxperson?oslc.where=personid="CAR%"
Authorization: Basic bWF4YWRtaW46bWF4YWRtaW4=

In Postman, the only change from the earlier request is the Authorization header.

Similarly, for the REST API:

GET http://maximo_host/maxrest/rest/os/mxperson?personid=CAR%
Authorization: Basic bWF4YWRtaW46bWF4YWRtaW4=
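Constructing that header value from raw credentials is the same Base64 step as before, plus the Basic prefix:

```python
import base64

# Encode username:password, then prepend "Basic " for LDAP-configured Maximo.
creds = base64.b64encode(b"maxadmin:maxadmin").decode("ascii")
headers = {"Authorization": "Basic " + creds}
print(headers["Authorization"])  # Basic bWF4YWRtaW46bWF4YWRtaW4=
```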

Maintaining Your Session

Once you establish a successful connection to Maximo, you will receive a cookie back in the response with a JSESSIONID token.

It is recommended to use this token in subsequent requests back to the Maximo API. This will improve performance and limit the number of sessions that Maximo creates.
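With Python's standard library, for instance, a cookie jar attached to an opener captures the JSESSIONID cookie from the first response and replays it on every later request automatically (a sketch; no request is actually sent here):

```python
import http.cookiejar
import urllib.request

# The CookieJar stores the JSESSIONID cookie returned by the first call
# and attaches it to every subsequent request made through this opener.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# opener.open(request) would now reuse one Maximo session across calls.
```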

Please feel free to leave any comments or questions below. For visual instruction of the previous steps, check out our video tutorial.