Welcome, everybody, to the Elevate community session. Today we’re gonna be talking a little bit about the LogicMonitor API.
More simply put, how we can leverage it for automation to integrate and simplify not just the management of LogicMonitor, but also any sort of integrations or workloads that you want to incorporate.
So we’re gonna go ahead and get started. Today we’re gonna talk a little bit about the API ecosystem. We’ll go over, really at a high level, some of the architecture, the authentication methods, and what SDKs and other resources you have available to you, and then we’re gonna watch a few demonstrations of some practical examples.
So, some common use cases from customers around resource management, how we can enhance some of the integrations and workflows you have, and then some automation ideas you might be able to put into practice around providing additional layers of context to some of the alerts and things you have going on within your environment.
Then we’re also gonna go into some of the documentation: how do we actually get started with LogicMonitor’s API, from generating credentials to your first API call.
And then lastly, we’re gonna go over some best practice information.
So that’s gonna be around rate limiting and some security best practices for integrations.
And then we’ll give you some takeaway information that you can use to further enhance the value you get out of LogicMonitor.
So when we take a look at the LogicMonitor API, it’s basically broken down into two authentication methods. We have our LMv1 authentication method (I guess we’ll call it legacy, but it’s not really legacy), and then we have the more traditional bearer token method.
When we think about LogicMonitor, there have been a number of iterations of the API over the years. We are currently at version 3, which is the latest version of the customer-facing API, and it supports either authentication method.
When we take a look at how you would access the API, one of the things you’ll notice is that every call shares a base URL of your portal followed by /santaba/rest. So there’s no version in the actual URL. The version is dictated by the presence of an X-Version header, or in your query you can specify an additional parameter, v=, with the value of the API version you wanna call. So it’s a little bit different than some of the other stuff you might see out there, but it’s important that you always use that latest version, v3, as that’s gonna have all of the PATCH methods and other things you might want to take advantage of.
So by default, if you haven’t been using LogicMonitor’s API, you wanna start with v3, as that’s gonna give you the latest set of features and functionality.
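Since both authentication methods come up throughout these demos, here is a rough sketch of what building LMv1 headers looks like under the hood. This is a minimal illustration in Python (the function name and defaults are my own, not part of any SDK); the signing recipe, HMAC-SHA256 over verb + epoch + body + resource path, hex digest, then base64, follows LogicMonitor’s REST authentication documentation, and the X-Version header pins API v3 as discussed above.

```python
import base64
import hashlib
import hmac
import time

def lmv1_headers(access_id, access_key, http_verb, resource_path, body="", epoch_ms=None):
    """Build LMv1 auth headers for a LogicMonitor REST call.

    Signature = base64(hex(HMAC-SHA256(accessKey, verb + epoch + body + path))).
    """
    if epoch_ms is None:
        epoch_ms = int(time.time() * 1000)
    message = f"{http_verb}{epoch_ms}{body}{resource_path}"
    digest = hmac.new(access_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return {
        "Authorization": f"LMv1 {access_id}:{signature}:{epoch_ms}",
        "Content-Type": "application/json",
        "X-Version": "3",  # pin the latest customer-facing API version
    }
```

With a bearer token you’d skip all of this and just send `Authorization: Bearer <token>`, which is why the bearer method tends to be simpler for quick scripts.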
So when we look at the SDKs and the different modules that are available, there are really three flavors, and which one you pick is gonna depend on your environment and what you’re comfortable developing in. For a majority of our customers, the Python SDK is really the most feature-rich in terms of supporting other workflows and integrations, just because Python has a ton of existing libraries you can tap into. But we also have a Go SDK, which is really great for high-performance applications that need to leverage the API. And then what we’re gonna be focusing on today, this being the community conference, is the PowerShell module. For those that don’t know, I’m Steve Velardi. I should have introduced myself back on that first slide. I am one of the principal sales engineers here at LogicMonitor, and I am the main contributor and developer for that PowerShell module.
What’s nice about this is that unlike the SDKs, which are built and maintained by LogicMonitor, the PowerShell module is fully community developed. Myself and other customers have put a lot of effort into making it as consumable and easy to set up and use as we can. So we’re gonna use this one as an example. But as we go through some of the demos and videos today, all of this is applicable to the SDKs as well, so just keep that in mind as we go through things.
So when we look at common use cases, a lot of the scenarios and integrations we see out there kinda fall into five different buckets.
First is gonna be around onboarding: having a standard operating procedure for how you onboard either customers or maybe new sites. You’re gonna wanna standardize how you onboard devices, device groups, alert rules, things like that.
Then you also wanna consider the fact that once they’re onboarded, how do we maintain that inventory? So we’re gonna look at some use cases around lifecycle management: how do we remove devices and clean things up when they’re decommissioned, and how do we potentially add some metadata and properties to help us keep tabs on what’s in production.
Then we’re gonna look into more of the process integration. Can we look at how you’re doing change requests and potentially add a workflow to that to improve context in LogicMonitor?
Or maybe we wanna automate some additional things: onboarding automation, alert processing, or even context enrichment.
So we’ll look at things like operation notes in LogicMonitor and how those might be a good use case for incorporating the LogicMonitor API into some existing workflows you might have.
So the first thing I wanna go through is how to actually connect to LogicMonitor for the first time. We’re gonna go through a quick video on how to set up LogicMonitor, how to set up the PowerShell module, and how to extract data for the first time. We’ll go ahead and watch that real quick.
Alright, everybody. We’re gonna go ahead and start off with connecting to the LogicMonitor API for the first time. In this example, I’m gonna use the PowerShell module. You can use the SDKs.
You can use Postman. But the idea here is that all of the documentation for the SDKs is available on our support site, which will walk you through authentication and some examples. For the PowerShell module, there is a docs site available in the GitHub repository that will also walk you through how to use the features. In our case, I’m gonna be using my LM portal, and I’ve already created an API token or bearer token, whichever you’re gonna use, that has the appropriate permissions.
So, being able to manage or write to certain parts of the platform.
Once you have those items, we can go ahead and go back to VS Code here. The first thing I’m gonna do is actually go and run this Install-Module.
Because I already have it installed, I’m just gonna hit no. I just wanted to show you the process to go through. But once you’ve installed the module, you have two methods that you can use for authentication.
You can cache a credential. What that does is use the Microsoft SecretManagement module to securely store a cached version of your account credential. That makes it easy if you’re gonna be moving between, say, a prod and a sandbox account and you wanna be able to quickly flip between the two. Or you can just use the Connect-LMAccount command and specify your bearer token or your LMv1 token upon connection.
So we’re gonna go ahead and create these two. We’ll go ahead and run that, and you’ll see that we now have our new tokens that we can leverage. In order to connect with one of those accounts, we can basically run the Connect-LMAccount command and reference the name that we gave that particular credential. So if I go ahead and connect in this case, you can see we’re connected with our LMv1 token. Whereas if I wanna go ahead and connect with our bearer token, it’s pretty much the same process, but now we’re connected via bearer token.
So once you’re connected to a particular portal, you’re gonna wanna test the connection and try to see if you can pull some data down. Just as a quick example here while we get started, I’m gonna run the command to get our devices, and I’m gonna use a glob expression for the display name to basically get anything that starts with lm-col. In this case, this should just be my collector. So we’ll go ahead and run that, and what we get back is our collector, with our nice “welcome, Elevate attendees” description.
So now we have a connection set up, and we can actually start to dive into some of the other aspects of the portal, automating things and leveraging some of the other features. I can also remove the cached accounts if I need to, or I can remove the API tokens as well.
I’m gonna leave those in here for the moment. But once you’re all done, you just wanna disconnect and clear out your session. You can just run that Disconnect-LMAccount, and now I can no longer... whoops.
I can no longer run this command. It’s gonna tell me I have to be logged in first. So that’s very quick and simple. Within five minutes, you can be up and running and starting to extract data programmatically from LogicMonitor. Very easy and very quick to get started. We’ll continue on with the next demo, where we look at some more examples of how we can leverage this in a more practical scenario.
So now that we’ve seen how to connect to LogicMonitor and how to start to extract data, we wanna actually start to use that data and refine the processes we use to pull certain bits of information from LogicMonitor. What we wanna look at now is some use cases around filtering: specifically, how do we target and extract the things we’re looking for without having to pull down the entire portal to do some automation activity.
So we’ll go ahead and take a look at that right now. Alright. Now that we’ve got our setup running and we’re able to pull data from LogicMonitor, the next thing is how do we take that further.
For the most part, when it comes to getting the required data out of LogicMonitor, when it’s just simple properties like display name and things like that, the filtering is fairly straightforward. We kinda saw that before.
If we go and reconnect back to our account, grab our device, and then look at the object itself, filtering on things like the display name and some of those properties is fairly straightforward, because they’re just a single text field. Where it gets a little more difficult, and this is usually where folks wanna target when it comes to filtering for certain types of data, is when they wanna leverage the properties they’ve invested in stamping on their resources. Whether those are things set at a device or device group level, or things picked up through Active Discovery.
The way that’s stored in LogicMonitor, it actually comes back as a hash table of all the key-value pairs for your different properties.
So to extract that through a filter can be somewhat difficult.
So in LogicMonitor, for certain endpoints, we have the concept of what’s called an advanced filter, specifically for things like those properties I mentioned.
The way you structure them can be sort of confusing to some folks. So what we’ve done, especially in the PowerShell module (we’ll go ahead and clear out here), is build some guided instructions that construct them for you. The concept is, in this case, if I wanted to search anywhere that the site custom property contains the string “Ohio*”, here’s how you would go about building that. If we go and run the Build-LMFilter command, it’s gonna walk us through what it is we actually wanna filter for. We already kinda looked at some basic filtering, so we’re gonna go with the advanced filtering. And what I’m looking for is a custom property.
Now, obviously, we have different operators we can use, but I just wanna find where it contains a certain value, so I’m gonna say contains. The property I’m looking for is a custom prop that we just called site, and the value is exactly what we wanna search for, “Ohio*”.
Now, obviously, I can tack on additional and/or combinations.
So if I wanna have site equals Ohio and hostStatus equal to alive or dead. Maybe I’m doing some cleanup.
That’s a quick way to build this filter out. I don’t have any other conditions, so I’m gonna go ahead and hit no. What that does is construct the filter for me, without me having to figure out what needs to be escaped and things like that. So I can actually just go and call that particular filter. If I just wanna go ahead and run the Get-LMDevice command, I can pass it that filter variable that it stored for me.
And what I get back is everything that ultimately has that site property stamped on it for Ohio.
So super simple. And even if we go back and look at that command we ran and run it with the debug flag, what we’ll end up seeing after this all goes through is that nicely formatted filter, all done and taken care of for you, so you don’t actually have to construct anything. That’s good if you’re trying to build a complex filter and you wanna validate that it’s going to be accepted by the LogicMonitor API. And then, obviously, you can subsequently use that in your scripts or your automation: you can just store it as a string or pass it to the filter property directly.
But that gives us a very quick way to build complex filters.
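If you’re rolling your own code instead of using Build-LMFilter, the same idea can be sketched in a few lines. This is an illustrative helper, not the module’s implementation: the operator and separator conventions here (":" for equals, "~" for contains, "," joining AND conditions, "||" joining OR conditions) are my reading of the v3 filtering syntax, so verify them against the filtering docs for the endpoint you’re calling.

```python
def build_lm_filter(conditions, join="and"):
    """Join (field, operator, value) tuples into a v3-style filter string.

    Assumed conventions: ":" equals, "~" contains, "!:" not-equals;
    "," joins AND conditions and "||" joins OR conditions.
    """
    sep = "," if join == "and" else "||"
    parts = []
    for field, op, value in conditions:
        # String values get quoted; embedded quotes are escaped.
        if isinstance(value, str):
            value = '"' + value.replace('"', '\\"') + '"'
        parts.append(f"{field}{op}{value}")
    return sep.join(parts)
```

For example, the “contains lm-col, and alive” case from the demo would come out as `displayName~"lm-col",hostStatus:"alive"`, which you could then pass as the `filter=` query parameter.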
So one of the other options that you have available to you is oftentimes when you’re trying to do, maintenance or things to a bunch of devices in the portal, you may only wanna target things that have changed. Right? So if I have five thousand devices in my portal, I may only wanna target the ones where, you know, I’ve recently updated them or the devices have been changed, or maybe I wanna compare it to the original list to know what has been modified versus what hasn’t been modified.
That’s where the delta flag comes in. If we go back here and just search for “delta API”, these are all the docs that it will link for you. Delta is a way for us to basically have a diff view, only returning the things that have changed, so I don’t have to return the entire device tree every time I wanna compare or confirm something has actually changed in LogicMonitor. The way you call that is through the use of this delta switch. I can use that same filter we had before (I’m gonna go in here and actually clear this out a little bit). The only difference, if we go and run it, is that when this returns, just like with the build filter, it’s gonna return a delta ID for us. This is an ID we can then reference to always see what changes have occurred to this particular query since the last time we ran it.
So in this case, what I’m gonna do is get that same set of devices and select the first five of them. Then I’m just gonna update their description to “updated”, which is technically already set here, so we’ll actually make it a little different.
Okay. There we go. How about that? So we’ll go ahead and run this, and what we get back is our five devices that we’ve just updated.
Now, if I was running that original query to see what has changed (obviously you’re not gonna run these right side by side, but you might have other automation happening), I can just run that Get-LMDevice with my delta flag, and what I should get back is just the things that have changed. If I were to run that again, I’m gonna get back nothing, because nothing has changed since the last time I checked that delta ID. So that’s a very easy way to make your scripts more performant, by not having to pull down everything when you just wanna validate that stuff has changed and was properly set as you expected.
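The bookkeeping side of a delta workflow, caching an inventory locally and applying only what changed, is easy to sketch. The helper below is hypothetical local code, not part of any SDK; the shape of the actual delta response (and the deltaId lifecycle) is covered in the delta API article mentioned in the demo.

```python
def apply_delta(inventory, delta_items):
    """Merge a delta poll's changed devices into a cached inventory.

    inventory maps device id -> device dict; delta_items is the list of
    changed devices returned when calling an endpoint with a deltaId.
    Returns (added_ids, updated_ids) so the caller knows what moved.
    """
    added, updated = [], []
    for item in delta_items:
        if item["id"] in inventory:
            updated.append(item["id"])
        else:
            added.append(item["id"])
        inventory[item["id"]] = item  # keep the cache current
    return added, updated
```

On each poll you only touch the handful of devices that changed, instead of re-pulling and re-diffing the whole device tree, which is exactly the performance win the delta switch is aiming for.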
I’ll link to that delta article at the end of the slide deck. But again, once we’re done, we’re gonna go ahead and just clear out our session. Now we can continue on to using this filtered data to actually do things like maintenance mode, onboarding, and other stuff like that. So we’ll go ahead and continue on with our next demo.
Alright. So we’ve connected to LogicMonitor, and we’re able to extract data and use things like the delta switch to get a diff view of what’s changed. Now we have the basic building blocks to play with and massage the data that we want.
Now we wanna actually start to apply those principles and incorporate some automation activities. What we’re gonna look at now is how we can use bulk import for setting up device groups and onboarding sites and services: all sorts of things we can put into practice so that when we have an M&A activity, or we acquire a new customer as an MSP, we can standardize on this onboarding practice. So we’ll go ahead and take a look at that now. Now that we have an idea of how to filter things and find what we’re looking for, next comes how we actually use this module to help us onboard and do some lifecycle management.
So whether you just acquired a new company and need to bulk import their devices into LogicMonitor, or maybe someone provided you with an inventory spreadsheet of things that aren’t in monitoring today that you now need to onboard.
These are some examples of how you might achieve that. Or even if it’s just restructuring your folder tree to better align with how you have things laid out in your CMDB.
Whatever the case might be, these are gonna give you some things to think about in terms of how you might want to approach onboarding. In my examples here, what you’ll notice is that these are not part of the module itself. In this case, this is just a custom function.
But to give you an idea, all of the things that we’re gonna be doing here are examples that come from the documentation.
So if I go to the code snippet library, you’ll see actually importing devices from CSV.
That whole function actually comes right from here. Same thing with the one that we’re gonna do for device groups. So like I said, there are a lot of good, practical examples in this document to reference and utilize. Feel free to peruse that and provide feedback on it as well. But for our purposes, we’re gonna reference these two commands, or functions, here.
Back in our terminal here, the first thing we’re gonna do is actually import a CSV.
So if we take a look at this particular file, I’m gonna go ahead and run that, and then we actually just wanna see what’s in it. Actually, I’m gonna do it a little differently.
We’re gonna format this as a table so we can get a little better view. It’s just a CSV of IPs, display names, a host group (the full path to where I want these things located), and then some information around properties, a description, and collector assignments.
So just a nicely formatted list of the things I wanna onboard. But one of the things you’ll note in my LogicMonitor portal, if we go back here, is that I don’t necessarily have that folder structure anywhere in here. One of the things that’s nice about using the API is that we can programmatically take care of a lot of those items as part of the process.
So if we go back here and actually run our import, I’m gonna go ahead and run that.
Oh, that would help if I imported the module that I have this in.
Let me clear that out.
So if I go ahead and run this command, what it’s gonna do is run that function, and the function is gonna process the groups to see if they exist. If not, it’s gonna walk through that nesting and provision any groups it needs along the way. That becomes super helpful, because I don’t have to ensure those groups exist ahead of time. I can lay out my folder structure however I want in, say, Excel, then just come and drop it in here and have it created for me. So it becomes a very easy way to do things like folder restructuring, especially if you’re gonna do some dynamic grouping, perhaps.
We’ll let this run, and while it’s completing, we can go back to our portal real quick. If we refresh it, what we end up getting is that new folder: it’s starting to provision all of the subfolders for us, along with all of the other resources.
So our resource has our owner, our service type, our application team, criticality: all these things that we wanted to put in here so we can use them, maybe for opening up tickets with the right ownership or team, as well as having everything nicely laid out so I can assign RBAC based on who needs access to certain types of resources.
That’s really helpful. It’s an easy way to onboard a bulk set of devices with minimal effort. Really, all of your effort is in just putting the bulk of what you actually wanna import into a CSV file.
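The group-provisioning walk that the import function does can be sketched independently of the API: given the full paths from the CSV, compute every level that has to exist, parents first, then create only the missing ones. The function below is illustrative (my own naming, not the snippet library’s code), but it captures the nesting logic described above.

```python
def group_creation_order(full_paths):
    """Given full group paths like "Acme/Prod/Web", return every level
    that must exist, parents before children, duplicates removed.

    Mirrors the pre-onboarding walk: check each level, provision if
    missing, then move one level deeper.
    """
    seen, ordered = set(), []
    for path in full_paths:
        parts = path.strip("/").split("/")
        for depth in range(1, len(parts) + 1):
            level = "/".join(parts[:depth])
            if level not in seen:
                seen.add(level)
                ordered.append(level)
    return ordered
```

Feeding the ordered list to your group-creation calls (or to New-LMDeviceGroup in the module’s case) guarantees a parent always exists before any of its children are created.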
Now, the same thing goes for groups. Say I already have all of these devices in LogicMonitor, and now I wanna actually take them and maybe restructure them a little.
Maybe I wanna make some dynamic groups, things like that. So we’ll go ahead and look at importing this group, and I’m actually going to format this as a table as well.
What we now have here is some groups and their paths, as well as descriptions, but I also have AppliesTo logic for a lot of these that references some of those properties we just created.
So with that, we’ll go ahead and run that up here. I’m gonna clear this out real quick.
What we end up getting is a similar process. It’s gonna go through, figure out if it needs to create any of the parent groups that don’t exist, and then create those main dynamic groups based on the layout we’ve constructed.
So we’re gonna go ahead and let that complete for a second. This is really helpful. I’ve found this particular one super helpful for folks that might have a really old portal that’s been neglected for a while, where you wanna freshen up how the folder structure looks and reassign things. Because remember, devices in LogicMonitor can exist in more than one spot. This becomes a nice way to export what you have in LogicMonitor and then realign it based on whatever new standard or convention you wanna follow for grouping.
So if we go back in here, we’ll minimize this one since it’s all finished. Now we can look at the dynamic groups we just created, which have our different base levels: critical systems, high priority, low priority, the ones that have everything that’s dev, whatever the case might be.
We can also do it by environment, which is great. So we can see all the stuff here: API facing, production services, customer facing, internal, all of that set up for us out of the box without really having to do anything in LogicMonitor.
Now, all of that we can certainly take and leverage. So I can come in here now and immediately start to plug this into LogicMonitor, and we can group by environment and maybe application team. There you go.
So now I have a very easy way to see what’s out there. I just onboarded it, and I’m quickly able to have a nice Resource Explorer view of that entire environment.
It took me all of five minutes to do, but it’s a very impactful way to get value out of how we quickly onboard, offboard, etcetera, by just leveraging a CSV file and a little bit of code. Not a ton, but a little bit of code to help expedite the process.
So now we’re gonna look at taking those devices and doing maintenance on them: how do we add ops notes and other bits of context to our environment?
So we’ll go ahead and take a look at that now. Alright. We just saw how we can automate the process of onboarding into LogicMonitor.
If we take that a step further, we now have all these nice new devices, device groups, and all the structure laid out that we need to start to monitor and get visibility into things.
But we’re gonna have to handle the lifecycle process now. If we have things that are gonna go down for patching or maintenance windows, or maybe we’re having a power supply replaced on a server, we wanna put that device into maintenance mode so we don’t send out a bunch of alerts.
Those are processes we can look to automate. So instead of having your LogicMonitor admin go into the portal and schedule downtime, we can look at different ways to automate that process, whether that’s patch cycles or integrating with maybe your CMDB. So we’ll take a look at some examples now and see how we can potentially improve that process. Alright.
So now that we have our devices onboarded and we’ve got our resource groups the way we like, we can start looking at some other interactions with LogicMonitor. Maybe not extracting data, but putting context and other things in place so we can suppress alerts, add context, things like that. The first one’s gonna be super easy, which is setting maintenance windows, or scheduled downtime (SDT), for resources. Now, I have four different examples here.
All of them essentially create SDTs, but they do it at various levels within LogicMonitor. If we look at this first one, it’s just setting an emergency maintenance window for our web server, or the first web server in our group.
This is great if you wanted to tack this onto an automation process for, say, any time an emergency change request is approved: make sure we put an SDT note in there. For the other ones, maybe we’re targeting a group of servers. In this case, our database team’s server group. We wanna have a recurring maintenance window that happens the first Sunday, for, it looks like, three hours.
Great. This would be a way for us to start automating the setup of all the different maintenance windows as we onboard maybe new sites, new servers, etcetera.
Or maybe you’re doing more of a production-scale maintenance where you’re trying to coordinate changes, and they all tie back to a particular change request. So we can go ahead and put that one in as well, and we’ll look at these all in a second. The last one is really more of a full automation script, where I have a CSV (in this case, I’m just converting a comma-separated text array into a CSV).
A set of devices with their maintenance date, which is just today, and then how many hours and what the reason for patching is. This way, if I wanted to run this as part of some script or maintenance task against a group of specific servers with different durations, I can automate it through a simple foreach loop. So we’ll go ahead and run this one.
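If you’re scripting SDTs outside the module, the main gotcha is that LogicMonitor expects start and end times as epoch milliseconds. Here’s a sketch of building a one-time device SDT payload; the field names follow my reading of the v3 SDT resource (type, sdtType, startDateTime, endDateTime, comment), so verify them against the API docs before relying on them.

```python
from datetime import datetime, timedelta

def one_time_sdt(start, hours, comment):
    """Build a one-time device SDT payload.

    start is a datetime; LogicMonitor wants startDateTime/endDateTime
    as epoch milliseconds. Scope fields (e.g. deviceId) get added per
    target before POSTing.
    """
    end = start + timedelta(hours=hours)
    return {
        "type": "DeviceSDT",
        "sdtType": 1,  # assumed: 1 = one-time
        "startDateTime": int(start.timestamp() * 1000),
        "endDateTime": int(end.timestamp() * 1000),
        "comment": comment,
    }
```

Looping this over the rows of a maintenance CSV (device, date, duration, reason) gives you exactly the bulk-SDT behavior the last demo example shows, one payload per device.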
Now that these are all done, we can see they’re all scheduled. If we go back and look at LogicMonitor and give it a refresh,
what we see now is our current SDT that we set for our web one. We have our group SDT that’s set for the first Sunday of every month from one to four for our database team. We have our production group set for six hours, and then we have our CSV import that set resource-level SDTs for various durations, which also included the notes about what was actually happening and the change request for our production SDT window.
All of this stuff can be managed and set up automatically. And when things are done and you wanna clean up some existing ones (I’ll clean up the ones I just created), I can go through and purge those out of LogicMonitor. In this case, I should now have a nice clean portal with no SDTs coming up. So very quickly, without a lot of code, we can put together these automations to help us enhance some of the additional areas we operate in.
So whether it’s putting things into maintenance mode for an approved change, or setting up, changing, or modifying your recurring maintenance windows, all of that can be automated so somebody doesn’t actually have to do it within the portal UI itself. The last thing we’re gonna look at is taking this a little bit further by providing additional context for people that might be troubleshooting or looking at particular issues within LogicMonitor. Alright. Hopefully, that gives you a little bit of an idea of how we can start to tack API calls onto some existing workflows to help automate the setup and maintenance of LogicMonitor.
Now, the next one here is our last video, but what I want you to take away from it is really the idea of context enrichment.
So when we think about operation notes in LogicMonitor, for those who aren’t familiar with them: they’re basically time-series markers that you can tag data or metrics with, so that when people are looking at information (say I’m looking at a switch or a server, and I’m looking at its CPU graph),
if I see a spike that goes up to a hundred percent, I wanna know what caused that. But maybe it’s not a service issue or an application issue. Maybe it’s something that happened in the environment: somebody pushed out an update, or somebody made a change on an upstream device, and that ultimately had an effect that’s going to impact that service or that server. So operation notes allow you to connect that context, that somebody did something somewhere else, to your time-series data, so we can have that single pane of glass and a view into some of the more non-metric-specific things that might affect the availability of services and resources.
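Wiring an ops note into a workflow like this mostly comes down to building the right payload for the ops-notes endpoint. A hedged sketch follows: the field names (note, tags as name objects, scopes pinning the note to devices, happenOnInSec as epoch seconds) are my reading of the v3 ops-notes resource, so confirm them against the API docs before using this shape.

```python
import time

def ops_note(note, tags, device_ids=None, happened_on=None):
    """Build an ops-note payload for the ops-notes endpoint.

    With no scopes the note is portal-wide; passing device ids pins it
    to specific resources so it shows up on their time-series graphs.
    """
    payload = {
        "note": note,
        "tags": [{"name": tag} for tag in tags],
        "happenOnInSec": int(happened_on if happened_on is not None else time.time()),
    }
    if device_ids:
        payload["scopes"] = [{"type": "device", "deviceId": d} for d in device_ids]
    return payload
```

A release pipeline could call this at deploy time with the build version as a tag, which is exactly the marker-on-the-CPU-graph scenario described above.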
We’re gonna go ahead and take a look at how we can potentially integrate ops notes into some existing workflows that you can take advantage of. So for our last demo, I wanna take a few minutes just to talk about ops notes. This is another super helpful feature within LogicMonitor that you can use to add additional context around your environmental changes.
Whether that’s new production build releases, maintenance tasks, or somebody who’s investigating an issue and wants to be able to correlate a spike in CPU to some other part of the environment, they become a really easy way to connect the rest of your org around what is actually going on. So I’m not gonna run through all of these examples in here. I’m just gonna run through this last one, which basically takes a set of servers and puts out a new release build as an ops note.
So this would be something that you would probably do as part of your release pipeline. Right? And if we go back to LogicMonitor, I'm looking at one of those front-end web servers. And if we show the operations notes, what we see here is that actual deployment along with the tags and version numbers and all that, and it's also now referenceable on my time-series data.
So if I was somebody who wasn't part of that deployment process, or didn't know that it was deployed, and all of a sudden I'm having issues with this web service, or maybe some downstream effect as a result of it, I would be able to see that reflected in my time-series data and easily correlate that we're having issues right around the time that that new release went live. Those are really helpful to incorporate in many aspects, whether you're moving stuff from your sandbox portal into production, doing code releases, or doing patching. All of those are super helpful to have as references within your LogicMonitor portal.
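As a rough sketch of that kind of release-pipeline step, here's what posting an ops note through the v3 REST API might look like in Python. The endpoint path, payload field names, and the device IDs below are assumptions based on the public API documentation; verify them against your portal's Swagger guide before relying on this.

```python
import json
import urllib.request


def build_ops_note(note, tags, device_ids):
    """Build the JSON body for an ops note.

    Field names ('note', 'tags', 'scopes') follow the published v3
    docs; the exact shape of 'scopes' may differ for device groups
    vs. individual devices, so double-check for your use case.
    """
    return {
        "note": note,
        "tags": [{"name": t} for t in tags],
        "scopes": [{"type": "device", "deviceId": d} for d in device_ids],
    }


def post_ops_note(portal, bearer_token, payload):
    """POST the note to the (assumed) v3 ops-notes endpoint."""
    url = f"https://{portal}.logicmonitor.com/santaba/rest/setting/opsnotes"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
            "X-Version": "3",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = build_ops_note(
        "Release v2.4.1 deployed",   # free-text note shown on the graph
        ["release", "v2.4.1"],       # searchable tags
        [42, 43],                    # hypothetical device IDs
    )
    # post_ops_note("yourportal", "YOUR_TOKEN", body)  # uncomment with real creds
```

Dropping a call like this into the last stage of a CI/CD job is all it takes to make every deployment show up on the relevant time-series graphs.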
Alright. So I know you guys are full of ideas now on ways that you can leverage the LogicMonitor API.
But there are some things we wanna keep in mind. Right? Obviously, with the API comes some best practices.
If you're using the SDKs, right, which you don't have to, you can write in whatever language you'd like and make the REST calls yourself, and I'll give you information on where you can get the Swagger guide and documentation. But if you're not rolling with the SDKs and you're rolling your own code, there are some things to keep in mind around rate limiting. With LogicMonitor, every endpoint has a limit on the number of requests you can make per minute. So if you're gonna be calling things in automation, just keep in mind that there is going to be a rate limit. You can find the limit for any endpoint in our support docs, or you can just look at the headers that come back from a request; they'll tell you what your remaining request balance is for that particular endpoint. The other thing is gonna be around pagination.
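As an illustration of reading those response headers, here's a small helper that decides when to pause between calls. The `X-Rate-Limit-*` header names follow LogicMonitor's published rate-limiting documentation, but treat them as an assumption and confirm against an actual response from your portal.

```python
def seconds_to_wait(headers):
    """Decide how long to pause before the next API call, based on
    LogicMonitor's documented rate-limit response headers:

      X-Rate-Limit-Limit     - requests allowed per window
      X-Rate-Limit-Remaining - requests left in the current window
      X-Rate-Limit-Window    - window length in seconds

    Header names per the public docs; verify against a live response.
    """
    remaining = int(headers.get("X-Rate-Limit-Remaining", 1))
    window = int(headers.get("X-Rate-Limit-Window", 60))
    # Out of budget: sit out the rest of the window. We don't know
    # when the window started, so worst case we wait the full window.
    return window if remaining <= 0 else 0


# Typical use inside a polling loop (the `resp` object is hypothetical):
#     wait = seconds_to_wait(dict(resp.headers))
#     if wait:
#         time.sleep(wait)
```

Pacing off the headers the server actually returns is more robust than hard-coding a requests-per-minute figure, since limits can differ per endpoint.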
So if you have large amounts of data, you're not gonna be able to pull it all in one shot. Use pagination and the capabilities in the API to recursively get all of the data that you need. And more importantly, around security: these API keys can pretty much do anything that a user can do in the portal. So limit permissions to what's actually required for that automation, periodically review what you have out there, and rotate keys as needed.
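The pagination approach mentioned above can be sketched as a small generator. This assumes the v3 convention of `size`/`offset` query parameters and list responses carrying `items` and `total`; the maximum page size varies by endpoint, so the 250 default here is an assumption to confirm in the API docs.

```python
def iter_all(fetch_page, size=250):
    """Walk a paginated LogicMonitor v3 list endpoint.

    fetch_page(offset, size) should perform the actual GET (passing
    size/offset as query params) and return the decoded JSON body.
    The 'items'/'total' field names follow the v3 list-response
    convention -- confirm them for the endpoint you're calling.
    """
    offset = 0
    total = None
    while total is None or offset < total:
        page = fetch_page(offset, size)
        items = page.get("items", [])
        if not items:
            break  # defensive stop if the server returns fewer than promised
        total = page.get("total", offset + len(items))
        yield from items
        offset += len(items)
```

Keeping the raw HTTP call behind `fetch_page` separates the paging logic from auth and rate-limit handling, so the same loop works for devices, alerts, or any other list endpoint.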
And if you're going to be doing some custom integration work, keep performance in mind. Right? Use the Delta API that we looked at, cache results, and batch operations so that you're not changing things one at a time but in bulk. Really, just keep an eye on how you can best streamline your use of the API so that things run quickly.
Right? So you don't run into the rate limits because you're making a bunch of requests back to back, things like that. So, some things to take away before we wrap up. There's a QR code here if you wanna get a link to that, and these will all be shared out in the slide deck: links to our official API guide that has all of the endpoints you can talk to.
There are some really good overview guides on how to use the API, in addition to the PowerShell documentation that we took a look at in the video before, as well as information if you're looking to contribute to that PowerShell module. Like I said, all of the good stuff that you guys saw today is the result of collaboration between us and our customers.
So if you're looking for your next project to contribute to and you wanna help improve or enhance the module, check out the repo to get started and kinda start contributing to it.
Other than that, thank you guys for your time today. And if you have any questions, you can connect with me on LinkedIn, and we can follow up.