Onboarding is an in-depth process that sets the stage for relationships between Managed Service Providers (MSPs) and their customers. Perfecting the onboarding experience gives customers confidence that choosing an MSP to manage their IT was, and is, the best choice. A poor first impression can damage credibility, so it’s vital to have a solid plan in place.
Everything, from the time the contracts are signed to the time the services go live, falls under the onboarding process. Onboarding a new customer is exciting; however, it is also quite challenging, and you can’t take shortcuts. Onboarding can take a couple of hours, or stretch to days, weeks, or even months. On average, an MSP can spend forty to eighty hours manually entering information from existing systems and databases to onboard a new customer.
Every customer is unique and has specific needs and challenges, so there is not just one precise process for onboarding. However, a systematic approach can save a lot of time and trouble, plus prevent many costly errors.
1. Make a Checklist to Standardize the Onboarding Process
Gather Details About the Customer
Since every customer is unique, begin creating an MSP onboarding checklist by eliminating the unknowns. Gather basic company details, as well as the technical infrastructure they are using. Assess the information to understand what the customer needs. This information will be very useful when drafting the Service Level Agreement (SLA).
Create the Service Level Agreement
Managed services are dynamic and a standard contract can’t be used with each customer. Instead, compose a unique, comprehensive service level agreement for each customer based on their specific needs. However, it is not a good idea to draft the entire contract on your own. It is best to hire a legal professional.
Acquaint the Customer With the Team
Once the service level agreement is signed, the new customer is onboard. Now is a good time to make the customer feel welcome. Some team members will work directly with the customer, so developing a bond from the start will benefit both the team and the customer in the future.
Integrate Networks
Integrate the customer’s existing network infrastructure with the MSP’s systems. This way, a framework can be set up to make the most of managed services and to monitor the customer’s network.
Implement Remote Monitoring and Management
At this stage of the onboarding process, implement remote monitoring software and the systems needed for remote services. Remote monitoring enables offsite access to the customer’s network and infrastructure, which allows MSPs to service the system without a physical visit to the site.
Train Employees
Customers will likely already have an in-house team who have experience with the company’s infrastructure. However, MSPs should familiarize the customers with any new tools or procedures.
Launch the Services
Once everything is set up to the customer’s satisfaction, it is time to launch and go live. Remind the customer that the terms of the contract are now applicable.
2. Find Check-In Frequency
Ask new customers about a preferred check-in style and frequency. After the onboarding process is complete, the relationship is fresh so it’s important to avoid misunderstandings or misaligned expectations. To accomplish this, complete periodic process reviews – maybe monthly for at least the first few months of the contract. This review provides the opportunity to identify concerns and prevent problems.
3. Personalize
Each customer is unique and has unique needs, so it is important to figure out what the customer is most interested in. A personalized experience is key to creating a more successful, engaging working relationship. Understand a customer’s goals and create a system that will help them achieve these goals.
4. Automate
Once check-ins and personalization are established, IT automation can be used during many of the steps in the onboarding process to decrease costs and cut out human errors. Some of the most common and best items to automate include:
Training
If your internal processes include a training component on your platform, training modules can often be automated.
Diagnostics
Automating initial and regularly-scheduled diagnostics saves time and energy.
Preventative Maintenance
Firmware updates, security updates, and similar maintenance can all be automated for best efficiency.
Cloud
Automation greatly increases speed surrounding any cloud integrations, updates, and additions to cloud environments.
Routines
Routines like billing, software deployment, and reporting can all be automated to go out based on the personalization, check-in frequency, and SLAs set with the customer.
See how global MSP Sogeti used automation to reduce customer onboarding time by 90%.
5. Create Role-Based Access Controls
Instead of allowing everyone on the network access to sensitive information, provide only enough access to allow the MSP team and the customer’s team to perform their jobs. This increases security and helps to streamline each employee’s role and responsibilities for maximum efficiency.
It’s also important to implement an identity and access management (IAM) framework. This is a policy and technology outline that helps ensure only authorized people can access certain information. As an MSP, the use of an IAM framework helps prevent customers’ sensitive data from being exposed to potential vulnerabilities. In some cases, this can mean some of the MSP’s access is also restricted.
Since an MSP has access to a significant amount of sensitive data, this makes it vital for the customer to be able to trust the MSP to protect data from cybersecurity risks.
6. Address Sales Concerns Immediately
Understandably, new customers have concerns about signing with a new MSP. These concerns need to be addressed immediately to prevent future complications. For example, if a customer was concerned during the sales process about authentication, make a point of addressing these concerns and provide proof that the authentication process fulfills their needs.
An MSP must think out-of-the-box and create solutions that bridge various gaps, provide a wide range of services, and develop connectivity offerings. This strategic thinking sets MSPs apart from the competition and improves their reputation in the industry.
Customers may be concerned about the cost of services. This can be a tricky point because MSPs want to offer the best price possible without shooting themselves in the foot. Find a balance between the value of the service and a price the customer can afford.
Many companies are afraid of leaving legacy infrastructure and systems. No matter how much technology can improve the way a company operates, some people are afraid to upgrade their hardware and applications, especially when budgets are tight. However, even though legacy systems are prone to vulnerabilities and security concerns, customers may still be hesitant to upgrade when they think the old system is “perfectly fine.” Address this concern by explaining the specific security vulnerabilities and potential threats they could face with the old system. Offer services and security measures in addition to regulatory compliance assurances.
7. Offer a Transparent Service Level Agreement
A service level agreement (SLA) is a binding contract between an MSP and a customer that identifies both the services required and the expected level of service. This agreement should be customized for each customer to ensure the customer’s needs are met, while still keeping the MSP protected. The contract should be clear, concise, and transparent. Set high standards for yourself, be fair, but don’t let the customer take advantage of you.
8. Help Desk Support
The help desk support team should go above and beyond the call of duty. Help desk support specialists are often the first point of contact when a customer has a problem or issue. They are the go-to people for technical assistance and support related to computer systems, hardware, and software. They must handle situations in a timely and professional manner. The help desk support team is typically responsible for the onboarding of new users.
9. Use Multiple Passwords
Create a simple guide to help customers deal with any problems they may have remembering passwords or keeping passwords safe. Password security is vital to protecting sensitive data.
10. Templatize and Monetize!
With virtual images, you can templatize the configuration of software environments, creating systematic, repeatable builds without sacrificing flexibility or leading to unsustainable virtual image proliferation.
Monetization allows you to turn a non-revenue-generating item into cash. Monetization can provide opportunities to create income from a variety of sources, such as embedded ads.
Conclusion
By following a clear and actionable onboarding process, all that is left for the MSP is to maintain great services and customer relations. When executed properly, that shouldn’t be difficult at all. Establish trust and strong ties from the start and the rest should flow along easily.
Automation has become the backbone for businesses wanting to stay afloat in the highly competitive markets today. Managed Service Providers (MSPs) are among those reaping heavily from IT automation processes.
The benefits MSPs obtain from IT automation include lower costs, reduced errors, and increased productivity. With automation, MSPs can acquire better data, become more reliable, and scale their operations.
What Is IT Automation?
IT automation refers to the process of developing systems and software solutions to minimize manual interventions and automate repetitive tasks. During the automation process, the software is set up to carry out procedures and repeat instructions. As a result, it enables the IT staff to have more time for other strategic work.
The possible applications of IT automation for MSPs are unlimited. However, the most common ones include:
- Cloud automation
- Security automation
- Network management
- Resource provisioning
- Configuration
Top Reasons Why MSPs Benefit From IT Automation
MSPs utilize different tools to streamline daily operations and handle a wide array of tasks for customers. Some duties are repetitive daily routines that need to be handled promptly to maintain best practices.
With that, IT automation enables MSPs to do more with less. With the help of IT automation, most MSPs have become more effective and efficient without incurring extra expenditures. Here are seven reasons why MSPs are benefiting from IT automation.
1. IT Automation Lowers Costs
Even though the initial cost of installing an IT automation system can be high, managed service providers end up saving more in the long run. Most MSPs have repetitive tasks that call for additional labor, and it’s expensive to acquire the human resources to do those tasks. The adoption of IT automation enables MSPs to manage more customers without incurring the extra costs of hiring more staff.
With IT automation, MSPs also perform these tasks at a higher level. They stay ahead of the competition by consistently delivering quality service at a favorable price, which keeps operational costs low.
2. Automated Tasks Have Fewer Errors
Mistakes are often inevitable with humans, no matter the level of training. Even the most qualified experts make mistakes. Properly configured automated systems, by contrast, perform the same task the same way every time. MSPs minimize the risk of errors by automating repetitive, manual tasks. Doing so helps free up skilled personnel and allows them to focus their expertise on more strategic and critical projects.
3. Automation Improves Reliability
Automated machines and systems are more reliable than humans and help MSPs deliver a more consistent customer experience. IT automation solutions are ever-present and guarantee consistency every time. Customers feel great knowing that they will receive the same results each day. With IT automation, MSPs become more resilient.
4. Automation Helps Scale MSPs
As technology advances rapidly, MSPs must upgrade their systems to remain competitive. However, scaling their business can be an uphill task, especially for new IT companies. Operational costs are often on the higher side for various processes, getting even more expensive when the processes are manual.
Automation makes the scaling process more achievable and makes it easier to accommodate a rising number of customers without hiring more staff. As business needs and customer demands continue to change, MSPs with automation solutions are ready to accommodate those changes. This can be challenging for MSPs operating manually, which may incur additional costs training their staff to meet ever-changing demands. Those training costs add to their operational costs, hindering growth.
5. Increased Productivity
Unlike humans, computers can work the entire day without taking breaks and still deliver quality results. Since MSPs are expected to handle so many tasks for different customers, IT automation solutions help them achieve exactly that and more. While the automation carries out repetitive processes, the staff is able to put their effort into primary operations, enhancing efficiency and productivity.
6. IT Automation Provides Better Data
MSPs with automated systems collect data that is accurate and reliable, and the automation software enables them to analyze the data and utilize it properly. Automation tools such as sentiment analysis create complex operational reports and give MSPs a deeper insight into various aspects of their business. This is something that can be hard to achieve with manual systems. Manual systems collect a large amount of data, but because of poor analysis and processing of the data, MSPs only end up using a small portion of it.
7. IT Automation Helps MSPs Make Changes Faster
IT automation solutions eliminate the use of manual, time-consuming tools like spreadsheets. The software also enables MSPs to automatically send email notifications to ensure that all of the participants are on the same page. With change management software, MSPs track all changes they make to ensure they don’t alter service levels. In other words, you gain control over every process, including regulatory compliance.
Best Practices for Successful Managed IT Services
Even though there are many benefits of IT automation, MSPs still have to adhere to other best practices to attain the highest return on investment. They include:
- Proactiveness – MSPs need to think of other ways to boost business value through IT automation. They can make use of resolution statistics and reporting capabilities to enhance value gain.
- Standardization – MSPs can achieve this through listing standard practices that apply to all clients, and later putting in place automation strategies to help them streamline operations further. Their focus should remain on creating a consistent and exceptional customer experience.
- Practice Policy Management – MSPs need to develop a proactive system with robust policy management. It will help them adhere to IT governance, compliance, and other regulations. Ensuring compliance will help win the trust of many customers since industry regulations are now more demanding on MSPs. This results from the rise of cloud technology and the growing concerns about data security.
- Continuously Optimize and Review – Automation plays a huge role in eliminating day-to-day repetitive tasks, but MSPs need to plan for the remaining duties. The best way to do it would be to continuously look for ways to reduce costs while delivering top-notch services. In order to remain on top of the highly competitive markets, MSPs have to optimize their operations and attract more customers through consistent quality services.
Bottom Line
With the automation of repetitive tasks, engineers are able to collaborate better, handle more business-critical technical assignments, and quickly identify the tasks that require urgent intervention.
The benefits of acquiring and maintaining IT automation solutions are immense. MSPs need to constantly upgrade their systems while looking for new ways to remain on top of the highly competitive market.
People around the world depend on Managed Service Providers (MSPs) to keep their businesses running like clockwork, even as their IT infrastructure evolves. Keeping workflows efficient leads to higher profits, but this can be a challenge due to a mix of on-premises infrastructures, public and private cloud, and other complex customer environments. The shift to remote work in 2020 due to the COVID-19 pandemic has only made this more challenging for MSPs. In order to adapt, more and more MSPs are investing in IT infrastructure monitoring and artificial intelligence (AI).
Keep reading for an overview of the LogicMonitor AIOps Early Warning System and how dynamic thresholds can mitigate these common challenges and add value for MSPs.
The AIOps Early Warning System
The AIOps Early Warning System intelligently detects signals from noise. This helps you identify where to efficiently allocate your engineers’ time. Quickly identifying these signals also helps you resolve issues faster. The Early Warning System consists of four main components: anomaly detection, dynamic thresholds, topology, and root cause analysis.
Anomaly Detection
Anomaly detection lets you visualize expected performance, based on historical data, and compare it to what is actually happening. Within the LogicMonitor platform, you can pop out an anomaly detection view to see whether what you’re seeing is normal performance or an anomaly. This saves engineers time in the troubleshooting process by allowing them to eliminate metrics or indicators that aren’t relevant.
Dynamic Thresholds
Dynamic thresholds expand on the visual anomaly detection that we offer in our metrics. Dynamic thresholds limit alert notifications based on normal behavior, such as knowing that the CPU on a server is always hot during a certain time of day. Because they detect and alert on deviations from normal behavior, dynamic thresholds allow you to catch deviations like a server CPU going to 0% when it is supposed to be busy.
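To make the idea concrete, here is a minimal, hypothetical sketch of how an expected range could be derived from historical samples and used to flag a deviation. It is illustrative only, not LogicMonitor’s actual algorithm; the function name, the sample values, and the three-sigma band are assumptions.

# Illustrative sketch only: derive an expected range from history and flag values outside it.
# Not LogicMonitor's algorithm; the 3-sigma band and sample data are assumptions.
function Get-ExpectedRange {
    param([double[]]$History, [double]$Sigmas = 3)
    $mean = ($History | Measure-Object -Average).Average
    $variance = ($History | ForEach-Object { [math]::Pow($_ - $mean, 2) } | Measure-Object -Average).Average
    $std = [math]::Sqrt($variance)
    [pscustomobject]@{ Lower = $mean - $Sigmas * $std; Upper = $mean + $Sigmas * $std }
}
$history = 62, 65, 61, 70, 68, 64, 66, 63, 67, 69   # CPU % samples from a normally busy window
$range = Get-ExpectedRange -History $history
$current = 0                                         # CPU at 0% when it is supposed to be busy
if ($current -lt $range.Lower -or $current -gt $range.Upper) {
    Write-Host ("Anomaly: {0} is outside the expected range {1:N1}-{2:N1}" -f $current, $range.Lower, $range.Upper)
}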
Topology
Topology automatically discovers relationships for monitored resources. It is a key part of the next component, root cause analysis.
Root Cause Analysis
Root cause analysis leverages topology to limit alert notifications. It identifies the root incident and groups dependent alerts together. For example, if a firewall goes down, LogicMonitor knows what else depends on the firewall and will send one alert instead of many.
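As a rough illustration of the concept (not LogicMonitor’s implementation), alerts can be collapsed to their likely root by suppressing any alert whose upstream dependency is itself alerting. The resource names and dependency map below are hypothetical.

# Hypothetical dependency map: each resource points to the resource it depends on
$dependsOn = @{
    'web-01'      = 'core-switch'
    'db-01'       = 'core-switch'
    'core-switch' = 'edge-firewall'
}
$alerting = @('web-01', 'db-01', 'core-switch', 'edge-firewall')
# Keep only alerts whose upstream dependency is not also alerting: the likely root causes
$rootCauses = $alerting | Where-Object {
    -not ($dependsOn.ContainsKey($_) -and $alerting -contains $dependsOn[$_])
}
Write-Host "Root cause alert(s): $($rootCauses -join ', ')"   # -> edge-firewall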

How Dynamic Thresholds Add Value For MSPs
Combined with other features from LogicMonitor’s Early Warning System, dynamic thresholds can help MSPs more proactively prevent problems that result in business impact. Let’s dive a little deeper into why dynamic thresholds are a key component in issue detection.
#1- Increase Productivity
The biggest benefit of dynamic thresholds is that they save engineers time. By detecting a resource’s expected range based on past performance, dynamic thresholds reduce alert noise and only send alerts when an anomaly occurs. This means that the alerts engineers receive are meaningful. They spend less time looking at alerts and can help more customers.
#2- Resolve Issues Faster
Dynamic thresholds don’t make you wait for static thresholds to be hit, which could take hours or days. They quickly detect deviations and determine whether the alert is a warning, error, or critical. As soon as an anomaly is detected, an alert is sent to get human eyes on it. Being able to hone in on the exact cause of the alert gives engineers more context, so issues can be resolved faster.
#3- Reduce Costs
Along with saving time and resolving issues more quickly, dynamic thresholds also allow MSPs to reduce costs. Experienced engineers, who are expensive, no longer need to handle monitoring and can focus on other areas of the business. Dynamic thresholds take over the chore of chasing thresholds, and less experienced engineers are empowered to do monitoring and really understand what’s going on and where their attention needs to be focused. Less experienced engineers spending less time figuring out issues means more money in your pocket.

The intelligence of dynamic thresholds combined with LogicMonitor’s comprehensive monitoring coverage ensures that MSPs have the visibility they need to succeed even in the most complex of environments. To learn more about how LogicMonitor can reduce costs and accelerate workflows for MSPs, check out this on-demand webinar.
The job market for IT professionals right now is challenging. Whether you’re seeking your first job in IT or looking to further your career into a more pronounced and distinguished role, certifications serve as a way to separate yourself from the crowd of applicants. Certifications demonstrate a functional level of proficiency, often making them more valuable than college degrees for certain entry-level positions, and just as valuable as years of experience in more established roles. With labs and virtual environments that set users up to hit the ground running, they are often more useful early in a career than learning theory in a college classroom.
Looking at certifications can be daunting. The first step should be to visualize an intended career path and search for related jobs and the qualifications necessary for those roles. Microsoft, Google, Citrix, and others have shifted their entire certification process to focus more on roles than on specific technologies, with distinct paths for DevOps, system administrators, network engineers, and others. These role-based certifications are great both for landing that first job, as well as changing responsibilities and promotions.
For 2021 and beyond, certifications in cloud and containers will offer the most for the future-focused, while the CCNA, Net+, and Sec+ sit as the most important for entry-level IT positions. Learn more about some of the most popular and in-demand certifications below:
General IT Certifications
Entering the world of IT and associated certifications should probably start with CompTIA certifications. Sec+, Net+ and/or A+ are all highly important, and constantly in demand as a prerequisite for a variety of entry-level roles and career advancement. The A+ teaches foundational knowledge in IT to prepare applicants for a variety of different IT careers and is often used to attract candidates with no professional IT experience.
The Sec+ is the gold standard for starting a career in IT Security, while the Net+ is great for getting a foot in the door to networking roles.
Learn more about CompTIA Certifications:
- CompTIA A+ Certification – Learn more
- CompTIA Network+ Certification (Net+) – Learn more
- CompTIA Security+ Certification (Sec+) – Learn more
Cisco Certifications
Across IT, the CCNA may be the most highly recommended network certification and offers the perfect all-around crash course that can be extended as you further your career. The CCENT, another popular Cisco certification, was recently folded into the CCNA, making the CCNA even more important for networking roles. Additionally, Cisco offers a variety of certifications based on different career tracks. The CCNP, CCIE, and CCAr also serve as fantastic advanced certs, depending on the career path. The only catch is that they are all, obviously, Cisco-specific. Nevertheless, the foundation and concepts, especially in the CCNA track, make it the most required certification in networking-related job applications.
Learn more about Cisco Certifications:
- CCNA
- CCT
- DevNet Associate
- CyberOps Associate
- Devnet Professional
- CCNP Enterprise
- CyberOps Professional
- CCNP Collaboration
- CCNP Data Center
- CCNP Security
- CCNP Service Provider
- CCDE
- CCIE Enterprise Infrastructure
- CCIE Enterprise Wireless
- CCIE Collaboration
- CCIE Data Center
- CCIE Security
- CCIE Service Provider
- CCAr
Azure Certifications
Microsoft is moving to role-based certifications and has announced that its popular MCSA, MCSD, and MCSE programs will be sunsetted on January 31, 2021. Originally planned to retire on June 30, 2020, the programs were extended due to COVID-19. With this change, Microsoft’s certifications are stepping away from networking and moving more toward Azure and cloud-based proficiency. Of these, the Azure Solution Architect shows expert skill levels for cloud-based roles and can provide a leg up in DevOps applications as well.
Learn more about Microsoft Certifications:
- MTA (offered in various fundamental concentrations)
- Azure Administrator Associate
- Azure Developer Associate
- Azure Data Engineer Associate
- Azure Security Engineer Associate
- Azure Data Scientist Associate
- Azure Database Administrator Associate
- Azure Solution Architect Expert
- Data Analyst Associate
- Azure DevOps Engineer Expert
- Azure AI Engineer Associate
AWS Certifications
With the whole world moving to the cloud, AWS certifications have never been more in demand. The most sought-after certification is the AWS Solutions Architect, which is offered at both the associate and more advanced professional levels. AWS has the largest share of the cloud market, so of Azure, GCP, and AWS, it is the one that will apply to the most companies. In addition to the AWS Solutions Architect track, AWS also offers developer- and DevOps-specific tracks for more depth into each role.
Learn more about AWS Certifications:
- AWS Cloud Practitioner
- AWS Solutions Architect Associate
- AWS Solutions Architect Professional
- AWS SysOps Administrator Associate
- AWS Developer Associate
- AWS DevOps Engineer Professional
Google Cloud Platform Certifications
Google offers a series of different certifications, but their GCP related ones are the most sought after in DevOps and cloud-related roles. In addition to their GCP certs, they also offer more entry-level IT professional training through Coursera. The Associate Cloud Engineer certification opens the door to the professional level certs which, similar to Azure, offer more role-based certifications depending on career goals.
Learn more about GCP Certifications:
- Associate Cloud Engineer
- Cloud Architect
- Cloud Developer
- Data Engineer
- Cloud DevOps Engineer
- Cloud Security Engineer
- Cloud Network Engineer
- Collaboration Engineer
- Machine Learning Engineer
Red Hat Certifications
Red Hat offers Linux-based certifications that are highly recommended for those on the sysadmin career path. The RHCSA specifically provides a great foundation for Linux-based systems analysis and can help kickstart a career. Although there are not as many openings for Linux-specific roles, demand currently outweighs the number of applicants, and earning Red Hat certifications can move a resume to the top of the pile.
Learn more about Red Hat Certifications:
- Red Hat System Administrator (RHCSA)
- Red Hat Certified Engineer (RHCE)
- Red Hat Certified Architect (RHCA)
- Red Hat Certified Enterprise Application Developer
- Red Hat Certified Enterprise Microservices Developer
- Red Hat Certified Specialist in Configuration Management
- Red Hat Certified Specialist in Ansible Automation
- Red Hat Certified Specialist in Security – Linux
- Red Hat Certified Specialist in Security – Containers
- Red Hat Certified Specialist in Containers for Kubernetes
- Red Hat Certified Specialist in OpenShift
- Red Hat Certified Specialist in Virtualization
VMware Certifications
VMware offers four tiers of certifications, most prominently the VMware Certified Professional (VCP) level. VCP certifications are role-based and cover a variety of topics within VMware, the most widely used being the VCP-DCV and VCP-NV. In addition to the VCP level, VMware also offers VMware Certified Advanced Professional (VCAP) and VMware Certified Design Expert (VCDX) certifications for most tracks.
Learn more about VMware Certifications:
- VCP-DCV VMware Certified Professional – Data Center Virtualization (also offered in VCAP and VCDX)
- VCP-NV VMware Certified Professional – Network Virtualization (also offered in VCAP and VCDX)
- VCP-CMA VMware Certified Professional – Cloud Management and Automation (also offered in VCAP and VCDX)
- VCP-DTM VMware Certified Professional – Desktop Management (also offered in VCAP and VCDX)
- VCP-DW VMware Certified Professional – Digital Workspace
- VCA-DBT VMware Certified Associate – Digital Business Transformation
Citrix Certifications
While Cisco is great, it’s best not to be tied down to one technology, and branching out into different disciplines is important. Enter Citrix. Citrix offers high-quality training for networking and digital workspace certifications. Among them, the CCP-N (not to be confused with the AWS CCP) provides depth into Citrix and is a great way to show networking proficiency beyond just the Cisco CCNA.
Learn more about Citrix Certifications:
- CCA – N: Citrix Certified Associate – Networking
- CCP – N: Citrix Certified Professional – Networking
- CCE – N: Citrix Certified Expert – Networking
- CC – SDWAN: Citrix SD-WAN Certified
- CC – XENSERVER: Citrix XenServer Certified
Container Certifications
Application- and role-specific certifications can help you stand out in the job market and carve out more exact career goals. The most in-demand career path right now: containers. Docker and Kubernetes both offer their own container-specific certifications. Along with cloud roles, container knowledge stands as the most needed skill right now and for the future.
For a full list of available Kubernetes Certifications, check out our blog explaining all the levels.
Learn more about Container Certifications:
- Docker – DCA Certification: Learn more
- Kubernetes – CKA Certification: Learn more
Python Certifications
Python is easily the most in-demand coding language and skill set among those that IT professionals pursue. The Python Institute certifications are a great way to learn Python and show off a functional knowledge of the language. With its growing popularity, the Python Institute expanded its program in 2020 to offer the PCPP, Certified Professional in Python Programming. It has two levels, 32-1XX and 32-2XX; the 32-2XX will be offered starting in late 2020.
Learn more about Python Certifications:
- PCEP – Certified Entry Level Python Programmer
- PCAP – Certified Associate in Python Programming
- PCPP – Certified Professional in Python Programming PCPP-32-1XX and PCPP-32-2XX
- CEPP – Certified Expert in Python Programming
Other Certifications
It’s important to focus on certifications based on the career you want. Almost every niche and technology has a specific certification attached to it that can help teach about the technology, as well as look good on a resume. Visualize the IT professional career you have in mind, and work backward to find the most needed certifications to excel in the role.
Automating client onboarding can eliminate the tedious tasks of cloning dashboards, creating the group directory structure, setting up reports, and configuring access roles. All of these tasks are prone to human error and, to put it mildly, not really fun to do. In this blog, we’ll walk through a PowerShell script that automates some of these tasks: creating the client group structure with multiple NOC locations, creating dashboard groups, cloning standard dashboards, and creating a read-only user role for new clients when they access the LogicMonitor portal.
I use a function named Send-Request written by a colleague of mine, Jonathan Arnold, which is available on the official Monitoring Recipes GitHub. This function simplifies each API request to a one-liner: Send-Request $resourcePath $httpVerb $queryParams $data
The code snippets in this blog define these variables along with the code logic to accomplish each specific task.
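If you prefer not to pull the helper from GitHub, below is a minimal, hypothetical sketch of what a Send-Request-style wrapper might look like, based on LogicMonitor’s documented LMv1 token authentication. It is not the actual Monitoring Recipes function; $company, $accessId, and $accessKey are placeholders for your own portal name and API tokens.

# Hypothetical Send-Request sketch using LMv1 token authentication (not the Monitoring Recipes version)
$company = "yourportal"     # placeholder: <yourportal>.logicmonitor.com
$accessId = "REPLACE_ME"    # placeholder API token access ID
$accessKey = "REPLACE_ME"   # placeholder API token access key
function Send-Request ($resourcePath, $httpVerb, $queryParams, $data) {
    $url = "https://" + $company + ".logicmonitor.com/santaba/rest" + $resourcePath + $queryParams
    # LMv1 signature: HMAC-SHA256 over verb + epoch (ms) + body + resource path
    $epoch = [Math]::Round((New-TimeSpan -Start (Get-Date "1/1/1970") -End (Get-Date).ToUniversalTime()).TotalMilliseconds)
    $requestVars = $httpVerb + $epoch + $data + $resourcePath
    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Text.Encoding]::UTF8.GetBytes($accessKey)
    $signatureBytes = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($requestVars))
    $signatureHex = ([BitConverter]::ToString($signatureBytes) -replace '-', '').ToLower()
    $signature = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($signatureHex))
    $auth = 'LMv1 ' + $accessId + ':' + $signature + ':' + $epoch
    $headers = @{ 'Authorization' = $auth; 'Content-Type' = 'application/json' }
    Invoke-RestMethod -Uri $url -Method $httpVerb -Headers $headers -Body $data
}

With the placeholders filled in, the calls used throughout the rest of this post should work unchanged.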
Creating the Resource Directory Structure
The first problem that needs to be addressed is the different NOC locations. MSPs will normally have a standard folder structure used across all clients for consistency. However, clients will have multiple NOCs that need to be accounted for. I used an array and a foreach loop to solve this.
<# Client Info #>
$Client = 'Woeber Capital'
<# NOC Locations #>
$SitesArrray = @("Atlanta","DR","Florida","Home Routers","LA","NYC")
In my group creation portion of the script, I first build the Root Client Folder using our LogicMonitor REST API Developer’s Guide.
#Main Client Group
$data = '{"name":"' + $Client + '","parentId":' + $clientDir + '}'
$resultsGrp = Send-Request $resourcePath $httpVerb $queryParams $data
*$clientDir is the ID of the main Client group and is optional
Then use a foreach loop to create a group for each NOC location and, under each of these groups, create a standard group structure of “System | Network | DR” groups. The $resultsGrp.id is the Device Group ID created in the previous step.
<# Static Groups creation Loop through Locations #>
foreach ($Site in $SitesArrray){
#Main Location Group
$data = '{"name":"' + $Site + '","parentId":' + $resultsGrp.id + '}'
$resultsLocation = Send-Request $resourcePath $httpVerb $queryParams $data
Write-host $data
#System
$data = '{"name":"System","parentId":'+ $resultsLocation.id + '}'
$resultsSubdir = Send-Request $resourcePath $httpVerb $queryParams $data
#Network
$data = '{"name":"Network","parentId":'+ $resultsLocation.id + '}'
$resultsSubdir = Send-Request $resourcePath $httpVerb $queryParams $data
#DR
$data = '{"name":"DR","parentId":'+ $resultsLocation.id + '}'
$resultsSubdir = Send-Request $resourcePath $httpVerb $queryParams $data
}
Creating the Dashboard Structure and Cloning Default Dashboards
Next, create the Dashboard group with a preconfigured ##defaultResourceGroup## token that points back to the group structure we just created. This way, any dashboards cloned to this Dashboard group will be automatically configured to point to the client group structures previously created.
# Add Dashboard Group with Widget Token
$resourcePath = "/dashboard/groups"
$queryParams = ""
$data = '{"name":"' + $Client + '","parentId":' + $dbGroup + ',"widgetTokens":[{"name":"defaultResourceGroup","value":"CLIENTS/' + $Client + '"}]}'
Write-Host $data
$resultsDashboards = Send-Request $resourcePath $httpVerb $queryParams $data
Cloning dashboards has a few tricky parts. The first is that the widget sizes are not saved in the dashboard itself, but in the widgets. It’s possible to get around this by getting the widget configuration from the original dashboard and patching the cloned dashboard with the correct widget size information. Details of how this can be done can be found in our Dashboard Developer API Guide. The code below accomplishes this.
# Loop through the widgets to build the widgetsConfig JSON
$i = 0
DO
{
$widId = $orgDashboard.items[$i].id
$widgetCof = $orgWidSize.widgetsConfig.$widId
$WidConf = $WidConf + '"' + $clonedDashboard.items[$i].id + '" :' + '{"col":' + $WidgetCof.col + ',"sizex":' + $WidgetCof.sizex + ',"row":' + $WidgetCof.row + ',"sizey":' + $WidgetCof.sizey + "}"
if ($i -lt ($clonedDashboard.total -1)){$widConf = $WidConf + ","}
$i++
} While ($i -lt $clonedDashboard.total)
write $data
# Patch Dashboard with the widgetsConfig JSON
$httpVerb = "PATCH"
$resourcePath = "/dashboard/dashboards/" + $cDashboard.id
$queryParams = ''
$data = '{"widgetsConfig":{' + $widConf + '}}' #| ConvertTo-Json -depth 3
$resizeDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
To specify which dashboards to clone, and to accommodate complex directory structures, it is possible to use arrays. In the example below, you can see arrays representing three dashboard groups: one for the root client level and two for subgroups, Microsoft and VMware. Each array contains the original dashboard IDs to be cloned. This is the same technique used for the NOC grouping structure earlier. The simplest way to find a Dashboard ID is in the URL; for example, in "/santaba/uiv3/dashboard/index.jsp#dashboard=7", the Dashboard ID is 7.
#Root level Dashboards
$DBToCloneArrray = @(7,54,62,42,73)
# Microsoft Dashboards
$msDashboards = @(295,354,122)
# VMWare Dashboards
$vmDashboards = @(294,165,456)
The full dashboard section is the most complicated part of the onboarding script. Below is the full code for this section and it uses the above root level dashboard array.
<# Clone Root Level Dashboards #>
foreach ($DB in $DBToCloneArrray){
#Get Dashboard Name
$httpVerb = "GET"
$resourcePath = "/dashboard/dashboards/" + $DB
$queryParams = '?fields=name'
$data = $null
$orgDbName = Send-Request $resourcePath $httpVerb $queryParams $data
write $resourcePath
write "testing = " + $orgDbName
# Clone the dashboard
$httpVerb = "POST"
$resourcePath = "/dashboard/dashboards/" + $DB + "/clone"
$queryParams = ''
$data = '{"name":"' + $orgDbName.name + '","sharable":true, "groupId":' + $resultsDashboards.id + '}'
$cDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
write $data
# Original dashboard Widget Size, Sort by name to ensure array in same order as Cloned Dashboard
$httpVerb = "GET"
$resourcePath = "/dashboard/dashboards/" + $DB
$queryParams = '?fields=widgetsConfig'
$data = $null
$orgWidSize = Send-Request $resourcePath $httpVerb $queryParams $data
$orgWidSize | ConvertTo-Json -Depth 6
# Original Dashboard Widget List, Sort by name to ensure array in same order as original Dashboard
write "Original Widget List"
$httpVerb = "GET"
$resourcePath = "/dashboard/dashboards/" + $DB + "/widgets"
$queryParams = '?fields=id,name&sort=+name'
$data = $null
$orgDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
$orgDashboard | ConvertTo-Json
# Cloned Dashboard Widget List
write "Cloned Widget List"
$httpVerb = "GET"
$resourcePath = "/dashboard/dashboards/" + $cDashboard.id + "/widgets"
$queryParams = '?fields=id,name&sort=+name'
$data = $null
$clonedDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
$clonedDashboard | ConvertTo-Json
# Build the widgetsConfig JSON
$WidConf = ''
# Loop through the widgets to build the widgetsConfig JSON
$i = 0
DO
{
$widId = $orgDashboard.items[$i].id
$widgetCof = $orgWidSize.widgetsConfig.$widId
$WidConf = $WidConf + '"' + $clonedDashboard.items[$i].id + '" :' + '{"col":' + $WidgetCof.col + ',"sizex":' + $WidgetCof.sizex + ',"row":' + $WidgetCof.row + ',"sizey":' + $WidgetCof.sizey + "}"
if ($i -lt ($clonedDashboard.total -1)){$widConf = $WidConf + ","}
$i++
} While ($i -lt $clonedDashboard.total)
write $data
# Patch Dashboard with the widgetsConfig JSON
$httpVerb = "PATCH"
$resourcePath = "/dashboard/dashboards/" + $cDashboard.id
$queryParams = ''
$data = '{"widgetsConfig":{' + $widConf + '}}' #| ConvertTo-Json -depth 3
$resizeDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
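The code above only clones the root-level array. As a rough sketch of how the subgroup arrays could be handled the same way, the snippet below creates a Microsoft dashboard subgroup under the client’s dashboard group and clones $msDashboards into it. The group name is an assumption, and the widget-resize loop shown earlier would be repeated for each clone.

# Hypothetical sketch: create a "Microsoft" dashboard subgroup and clone the $msDashboards array into it
$httpVerb = "POST"
$resourcePath = "/dashboard/groups"
$queryParams = ""
$data = '{"name":"Microsoft","parentId":' + $resultsDashboards.id + '}'
$msGroup = Send-Request $resourcePath $httpVerb $queryParams $data
foreach ($DB in $msDashboards){
    # Get the original dashboard name
    $httpVerb = "GET"
    $resourcePath = "/dashboard/dashboards/" + $DB
    $queryParams = '?fields=name'
    $data = $null
    $orgDbName = Send-Request $resourcePath $httpVerb $queryParams $data
    # Clone it into the Microsoft subgroup
    $httpVerb = "POST"
    $resourcePath = "/dashboard/dashboards/" + $DB + "/clone"
    $queryParams = ''
    $data = '{"name":"' + $orgDbName.name + '","sharable":true, "groupId":' + $msGroup.id + '}'
    $cDashboard = Send-Request $resourcePath $httpVerb $queryParams $data
}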
Creating the Client User Access Roles
Now that everything is created and we have all the needed information, we can create the user security role that gives read-only access to the new clients. This is simply creating a new role with "operation":"read" for the client groups created by the script. Details for creating user access roles are in the roles section of our API Developer’s Guide.
# Add the Company Role
$resourcePath = "/setting/roles"
$queryParams = ""
$data = '{"name":"' + $Client + '","privileges":[{"objectType":"dashboard_group","objectId":' + $resultsDashboards.id + ',"objectName":"' + $Client + '","operation":"read"}, {"objectType":"host_group","objectId":' + $results.id + ',"objectName":"' + $Client + '","operation":"read"}, {"objectType":"report_group","objectId":' + $resultsReport.id + ',"objectName":"' + $Client + '","operation" : "read"},{"objectType":"help","objectId" : "feedback","objectName":"help","operation":"read"},{"objectType":"help","objectId" : "document","objectName":"help","operation":"read"},{"objectType":"help","objectId":"training","objectName":"help","operation":"read"}]}'
Write-Host $data
$resultsRole = Send-Request $resourcePath $httpVerb $queryParams $data
We’ve discussed the different sections of an MSP onboarding script, with examples of how to code them. The tools outlined should make it possible to develop onboarding scripts flexible enough for custom MSP onboarding needs and to automate a rather tedious process. For more details and Professional Services options, please reach out to your Customer Success Manager.
This is a guest blog post by Patrik Nordlund, Infrastructure Manager at Retune AB. Retune AB is a Managed Service Provider based out of Stockholm, Sweden, and has been using LogicMonitor to help customers modernize monitoring for hybrid infrastructure. Patrik has been working in IT since 2003, handling everything from clients to servers to networking.
Retune AB manages a variety of Ubiquiti devices — wireless data communication products for enterprise and wireless broadband providers. Naturally, we wanted to bring these devices under monitoring. However, Ubiquiti does not expose real-time CPU or memory metrics through SNMP in a way that we found reliable, and these are some of the key values needed to verify the health of a device. We have had incidents where memory and CPU have spiked and stayed there, and only after a reboot were the resources released. We use the alerts triggered in LogicMonitor so that we can take action as quickly as possible.
After a quick web search, we found that other Ubiquiti users had discovered some unofficial OIDs to get average CPU usage at one-minute, five-minute, and fifteen-minute intervals, but these values did not work correctly on all of our devices. We were also completely blind when trying to view memory: the official UBNT-Unifi-MIB makes no mention of CPU or memory.
Thanks to LogicMonitor’s extensibility, we were able to find an easy workaround by running a PowerShell script from the Collector. Here’s how we did it:
- Install the PowerShell module Posh-SSH on the Collector so SSH can be used from PowerShell
- Connect to the access point
- Authentication is done with an SSH key pair. The public key is uploaded to the AP (via the Unifi controller) and the private key is stored on the Collector, protected with a strong passphrase
- You can edit $keyFile in the script, or put your private key in your Collector’s installation directory at: LogicMonitor\Agent\bin\Local_Disk_On_Collector\privatekey_LM.key
- Passphrase and username are added in LogicMonitor as properties unifi.sshuser and unifi.sshpassphrase.key so they are not exposed directly in the script
- Use the native Linux commands to get metrics for CPU and Memory
- Collected data is formatted to calculate percentage value and then returned to LogicMonitor
Script for CPU Usage
$username = "##unifi.sshuser##"
$passwd = "##unifi.sshpassphrase.key##"
$secpasswd = ConvertTo-SecureString $passwd -AsPlainText -Force
$device = "##system.ips##"
$keyFile = "Local_Disk_On_Collector\privatekey_LM.key"
$creds = New-Object System.Management.Automation.PSCredential ($username, $secpasswd)
# SSH-session to device
New-SSHSession -ComputerName $device -Credential $creds -Keyfile $keyFile -AcceptKey | Out-Null
# Get CPU stats
$cpuAll = Invoke-SSHCommand -index (Get-SSHSession -host $device).sessionid -Command "cat /proc/stat | grep '^cpu '"
# Drop session
Remove-SSHSession (Get-SSHSession -host $device) | Out-Null
# Calculate CPU usage
$cpuArray = $cpuAll.Output -split " +"
$cpuTotal = ([int]$cpuArray[1]) + ([int]$cpuArray[2]) + ([int]$cpuArray[3]) + ([int]$cpuArray[4]) + ([int]$cpuArray[5]) + ([int]$cpuArray[6]) + ([int]$cpuArray[7]) + ([int]$cpuArray[8]) + ([int]$cpuArray[9]) + ([int]$cpuArray[10])
$cpuIdle = ([int]$cpuArray[4])
$cpuUsage = ($cpuTotal-$cpuIdle)/$cpuTotal
$cpuUsedPercent = $cpuUsage*100
$cpuPercent = ([int]$cpuUsedPercent)
Write-Host "CPUUsage=${cpuPercent}"
Exit 0
Script for Memory Usage
$username = "##unifi.sshuser##"
$passwd = "##unifi.sshpassphrase.key##"
$secpasswd = ConvertTo-SecureString $passwd -AsPlainText -Force
$device = "##system.ips##"
$keyFile = "Local_Disk_On_Collector\privatekey_LM.key"
$creds = New-Object System.Management.Automation.PSCredential ($username, $secpasswd)
# SSH-session to device
New-SSHSession -ComputerName $device -Credential $creds -Keyfile $keyFile -AcceptKey | Out-Null
# Get Memory stats
$memAll = Invoke-SSHCommand -index (Get-SSHSession -host $device).sessionid -Command "free | grep 'Mem:'"
# Drop session
Remove-SSHSession (Get-SSHSession -host $device) | Out-Null
# Calculate memory usage
$memArray = $memAll.Output -split " +"
$memTotal = $memArray[1]
$memUsed = $memArray[2]
if ($memTotal -eq "Mem:"){
    $memTotal = $memArray[2]
    $memUsed = $memArray[3]
}
$memUsedPercent = [int](($memUsed/$memTotal)*100)
Write-Host "MemoryUsed=${memUsedPercent}"
Exit 0
Our team noticed there were slight differences in the output from the Linux commands, which means that you might have to tweak the scripts to suit your specific devices.
It is possible to get these metrics from the Unifi controller via its API. However, in that case, each device generates a query to the controller, causing an influx of data as well as a single point of failure for your network monitoring. Thanks to LogicMonitor, we have the capability to monitor what is important to our business and now have confidence in the metrics we are receiving for Ubiquiti devices.
These modules have been published and are available via the LM Exchange, a repository for customers to exchange datasources, and can be found using the codes 6H7TE3 and J9NFZJ. If you’re interested in learning more about how LogicMonitor can help in your environment, sign up here for a free trial or demo.
Too Many Tools
Birmingham-based TekLinks is one of the top managed service providers (MSPs) in the world. A long-time LogicMonitor customer and partner, TekLinks owns and operates three data centers and services more than a thousand customers, making it a true enterprise. The success and rapid growth of the company, along with business acquisitions and individual software procurement resulted in a classic case of “too many tools” for the sophisticated MSP.
MSP monitoring solutions need to do more than just provide visibility. They need to extract and deliver powerful business insight to drive results. For TekLinks to grow to the next level successfully, they needed to simplify their toolset, consolidate vendors, and empower users within the organization with insight into how their entire IT infrastructure was performing, from data centers to devices and applications. The combination of a thoughtful approach, considered processes, and the implementation of LogicMonitor resulted in a very successful tool consolidation project. They were guided by three steps to get there:
#1. Create a single source of alerting for internal operators and customers
TekLinks had multiple monitoring systems in place prior to consolidation. Some solutions provided text alerts, others sent alerts exclusively via email, and some didn’t alert at all. There was no single way for everyone on the team to have visibility into all the systems. And the lack of key platform features like role-based access control and multi-tenancy meant their customers had no visibility into the systems being managed.
The requirement seemed simple: allow every team and individual in the organization to have access to the same information to address issues, and allow customers access to the same information, securely. But did such a solution exist? None of their existing monitoring solutions fulfilled this basic requirement.
#2. Identify services architectures
TekLinks needed to understand all the various services architectures and their dependencies to truly understand the scope of the monitoring that needed to be deployed. In some cases, TekLinks had hardware that operated in a silo, and the monitoring system needed to see into every layer of the hardware stack. Additionally, in TekLinks’ case, every customer needed its own security zone, so the chosen monitoring tool could not be deployed in a centralized network with firewalls and private connections to gain the needed visibility. That would be too complex.
To add to the architecture complexity, TekLinks works with some of the largest hardware vendors in the world in addition to several highly customized, open-source technologies. The deployment needed to work with everything from basic Cisco switches to temperature control systems in the datacenter. The LogicMonitor platform was the only solution that met these requirements. Beyond that, because LogicMonitor is SaaS-based, there was no need to allocate any of the existing infrastructure to run the monitoring platform. The agentless solution sits outside of the system being monitored, which means it stays running even during an outage.
#3. Understand who you want to notify, and how
TekLinks not only needed a “single source of truth” for internal operations, they also required it to provide operational transparency to customers. Thanks to LogicMonitor’s multi-tenancy and granular role-based access control, TekLinks is able to share data used by the NOC team, alerting customers to issues when and where appropriate.
Offering LogicMonitor as a single solution has helped TekLinks grow their business. Because the entire organization can view customer monitoring data, the sales team can identify upsell and cross-sell opportunities and proactively alert their customers with solutions.
The Importance of Business Outcomes
Over the course of this process, TekLinks found that, first and foremost, they needed to start from the business outcomes and work backwards. Build a Policy and Procedures Statement that documents how information should flow. Make sure all stakeholders clearly understand these policies and their impact. Introducing new tools and procedures will often require buy-in. Here, too, LogicMonitor helps ease the transition by allowing product, engineering, sales, and management teams to see real value in a truly integrated environment.
In the end, TekLinks’ consolidation project succeeded not just because they chose the right monitoring solution. It worked because the organization was willing to constantly improve their internal processes. They realized that even one of the top MSPs in the world can get better and do more to make good on their promises.
Recently at IT Nation I led a panel discussion on the “good, bad and ugly” associated with the trend and experiences of MSPs transitioning to CSPs. This article captures the main takeaways from the panel session. Should MSP owners and operators re-invent their businesses as CSPs and what are the key considerations in doing so?
Is it too late to build vs. buy? Building out a virtualized private cloud is an expensive proposition. The MSP has to determine whether building out a private cloud infrastructure makes sense versus either working with a master private cloud provider (like Artisan Infrastructure) or slicing up and reselling a managed cloud offering from a company like Zumasys, RackSpace or VMWare. Lyf Wildenberg, CEO of MyTech Partners out of Minneapolis, MN, suggested that “Determining to build (a private cloud) means that the MSP must be resourced appropriately in order to handle the change in the support load, NOC operations and infrastructure delivery. A strategic assessment of the MSP’s staff, its supporting infrastructure and toolset capabilities must be completed before deciding to implement a private cloud infrastructure.”
Does an optimal target market customer (to offer cloud services to) exist? Panelist Tommy Wald, former CEO of WhiteGlove Technologies (acquired by MindShift in 2012), offered a strong opinion. “The complexity of the technology and IT services needs of mid-market enterprises is such that offering a cookie-cutter approach to them is a tough sell. MSPs that serve the SMB market typically offer more commodity service offerings. Those offerings are most easily replicated in a multi-tenant private cloud offering. So the MSPs serving the SMB are best suited to offer a private cloud product.” The other panelists debated whether certain vertical markets could be easily serviced by a private cloud offering. They agreed that certain verticals in which private client data breaches are a concern (legal, finance, etc.) are more appropriate for a single-tenant private cloud only if the business opportunity is big enough.
Is cloud cheaper? The panelists all agreed that offering cloud services is not cheaper. Jim Lancaster, President of Sagiss in the Dallas Metroplex, suggested, “MSPs either are buying equipment and racks for dedicated equipment or paying license fees for a hosted virtualized environment. Either way, the expenses are such that the gross margins of the business model are not shifted dramatically either way, assuming your basket of services pricing stays constant.” The panelists advised MSPs to compare their pricing and business model against any potential shift in their usual MSP-based services pricing or billing methodology to determine the marginal differences.
What are the risks to consider? The panelists identified several risks in considering a cloud services deployment model. The discussion regarding risks surfaced many tough questions. The panelists unanimously noted that each MSP must ascertain which risks are most prevalent for their own situation. Some of the risks identified include:
Security: Security is a risk for both the MSP and its clients. Is the cloud services infrastructure secure against an external breach? Will clients be concerned about the physical or virtual location of their data? Creating documented and audited procedures for operations processes and security procedures will help put cloud service customers at ease with the transition.
Single point of failure: Like any business consideration, MSPs must be confident that deploying a cloud services model doesn’t put the predictability of stable MSP business revenue or profits at risk.
Competition from other client service providers: If an MSP is not currently offering a cloud service offering there is a good chance that another of their vendors (for example ISPs or ISVs) might try to upsell cloud services to displace their current managed service offering. To limit this risk, MSPs must maintain close, trusting relationships with their customers and work to own as much of the service delivery stack as is reasonable for their business.
The panel session concluded with great discussion around the topic of “keys for success” in deploying cloud services and completing the transition from MSP to CSP:
Assess your team. According to Kevin Fitzpatrick, Director of Technical Support at CSP Zumasys, “Determine if your current staff is currently capable to manage the cloud business offerings. If they aren’t then you’ll have to assess whether or not they are capable to train to make the transition. If you determine they aren’t capable to make the transition, then what? You’ll need to re-staff or reconsider your potential to move to becoming a CSP.”
Use clear contractual agreements to give customers assurance about the delivered services and their ownership of their data. Transitioning a customer’s data to the cloud isn’t necessarily an easy process. Customers should also know their rights and their ability to move off the cloud should they choose to do so. Ensure that your contracts are clear. Sure, friction in moving data may provide you some “stickiness” with your customers, but you will build trust and goodwill by making sure they know just what it means to move data to the cloud and how they can get off of it at a later date if desired.
Create value to survive and to thrive. Wildenberg from MyTech Partners put it best, “As a service provider, whether you are a traditional MSP or moving to CSP, all of the risks we’ve talked about and the competition within an accelerating technology market are such that you must absolutely offer and surround each of your customers with a valuable service experience in order to maintain and grow your business.” It’s as simple, and difficult, as that.
Here at LogicMonitor we serve hundreds of managed service providers with our cloud-based performance monitoring platform. “MSPs” range from true blue service providers, to Cisco VARs, to Cloud Providers and System Integrators. MSP monitoring typically means using LogicMonitor to monitor their own equipment and hosted apps. Many also will drop a LogicMonitor collector at a customer site or remote sites to monitor critical customer side infrastructure. Our flexible deployment model avoids the hassle of setting up VPNs that legacy premise-based monitoring tools require.
Lately I’ve spent a lot of time in airports and on freeways to better understand our MSP customers, and I’m amazed by their expert use of LogicMonitor and their willingness to share best practices to help other MSPs monitor better (and make more $!). I’d like to thank a couple of MSPs in particular — Sagiss (in Big D) and CIO Solutions (in our hometown of Santa Barbara) — for contributing to help us build a best practices guide for using LM within MSPs. Here’s the first in a series to help MSPs get the most out of LogicMonitor, and hopefully contribute to your success.
At LogicMonitor, we know datacenter monitoring. We know it because we’ve lived it – I’ve been that guy responsible for making sure that a 24 x 7 x 365 web service was up. Most of our technical staff also came from a SaaS and web ops background (with lots of work in corporate IT worlds, too).
Recently, however, we’ve had quite a few managed service providers adopting LogicMonitor for their monitoring needs. Which makes a lot of sense. An MSP is just as dependent on the uptime, (more…)