itSMF USA Cleveland – HDI North Coast
Good Practice Discussion Topics
Facilitator Guide:
Lead a table discussion with the group through each of the 7 topic areas below. The objective is to
generate a group discussion of good practices used in each topic area to drive performance. Good
practices will be combined into the final report.
Key Points to Remember:
Topics will be approximately 5-6 minutes each.
Focus the discussion on positive actions and best practices, not pain points or challenges.
Ask prompting questions; get details about best practices.
If there is a discussion about pain points, turn it to a positive. “What was your biggest pain point
and how did you improve the situation?”
Take detailed notes that can be written into the best-practice guide.
1. User Support / Service Desk
FACILITATOR QUESTION: What does your IT Organization do really well with regard to User Support?
Examples: First Call Resolution, Cost Efficiency, language support
Good Practices Responses:
FCR / FLR Implementation (Cleveland Clinic)
o Saved them from being outsourced
o Went from 10-15 daily complaints to 1-2 daily compliments
Call reporting for every call
Surveys for every resolved incident
Service Desk phone calls are only for incidents. Service requests must be submitted by user via
self-help portal (Lubrizol)
Chat bots available to assist SD technicians. The bot is also applied to their help page and allows a
user to get answers to questions, or be pushed to a help page to submit a ticket, with no call in to
the SD required.
First call resolution up to 75%+ due to special onboarding/training processes for new
technicians. Also includes a "graduation-style" help desk where they cannot move on to harder
call types until they have a proven history of completion on the easier, more common calls.
Apple Genius Bar style tech setup.
Direct Autoticketing by phone call.
Chat usage (Progressive and KeyBank) allowing multiple (usually 3+) tickets to be chatted about
with end users at the same time.
Outsource after-hours support instead of relying on internal on-call staff
o Reduce end users' wait time
o Increase first contact resolution (many issues resolved by outsource provider)
o Reduce use of internal staff during non-working hours (outsource provider only contacts
on-call for critical/urgent issues/requests)
Knowledge management website for support staff
o Increase first contact resolution
o Reduce contact handling time
Use templates to record incidents and requests
o Reduce contact handling time
Recruit and hire tech-savvy end user customers
o Increase customer focus
Mandate knowledge base use before assigning / escalating (knowledge articles have clear
escalation paths)
o Reduce improper assignment / escalation
o Reduce MTTR
Customer Satisfaction Surveys – used as input to service improvement plans
o Improve services
Ongoing, Regular Incident Analysis to Identify Shift Left Opportunities
o Increase first contact resolution
o Decrease assignments to other support groups
o Reduce MTTR
Use Shift Coordinators / Supervisors
2. Change Control / Change Implementation
FACILITATOR QUESTION: What does your IT Organization do really well with regard to technical change
control / Change Management?
Examples: CAB, Change Awareness, Forward Schedule, Approval requirements, collision detection,
impact analysis
Good Practices Responses:
Communications of all changes are sent out via email in a standard format, which shows
changes color-coded by impact
Use of an E-CAB for emergency changes
Use of production readiness documentation/review
Use of a strictly enforced and automated approval process
Use of pre-CAB review sessions
o Designated team also does previous change/incident analysis and links incidents to
change requests
Use of post-mortem reviews for high impact incidents caused by a change
Use of post-mortem reviews for all ‘failures’, and all ‘successful with issues’
o Corresponding incidents are sent off to the Problem Management team
Use of a change community/web page for collaboration of change management activities and
documentation
Communicating a forward schedule of changes showing what changes are happening in the
upcoming week
Performing regular change audits. During audits, reviews are done to ensure that changes that
were implemented as Standard meet the requirements for a Standard change, and that change
management policies were followed. For those changes that did not follow policy, a report is
provided to call attention to them.
Use of a change calendar which shows upcoming changes color-coded by impact in a calendar
view (BMC Remedy)
Weekly CAB meetings held, enterprise wide. CAB is used for non-routine changes and can
usually last up to 2 hours. During the CAB, however, attendees are allowed to reject other users'
changes based on possible collisions in the environment.
Users did note that their CABs don't have any "teeth": if something is done that was not
approved in the CAB, there really are not enough top-down repercussions.
Use of maintenance windows.
Special “Production Readiness CAB” which works on building a release process for a set of
upcoming changes, and going through full collision detection discussions and review with other
groups. During the inter-team reviews they discuss potential impact and possible call volumes
based on the changes. They make sure that enough staff will be on hand to handle the call
volumes, otherwise they push the changes back or rearrange the changes to flatten the call
volume over a larger spread of time.
Documented process
o Details, impact, backout plan, approval, communication, forward schedule of changes
Tool configured to enable process, make it easy for people to submit change requests
Formal CAB
Change Management team dedicated to following and improving process
3. User Self Service / Service Catalog
FACILITATOR QUESTION: What does your IT Organization do really well with regard to User Support Self
Service?
Examples: Consolidated portal, Level 0 Self Help, Knowledge Management, Automated Fulfillment
Good Practices Responses:
Use of a portal was common
Built-in portal which requires approvals for service requests. There is no option to call the Service
Desk for service requests; if a caller chooses that option, they are taken to a recording which
specifies where to access the portal.
Use of a robust search engine
Requiring or encouraging users to use self help (knowledge articles, etc.) before contacting the
Service Desk
Determining the most common calls made to the Service Desk (password resets, etc.) and rerouting
those calls to self-help areas
Utilizing a password reset tool
Utilizing automated ticket creation and routing with a combined ticketing/phone system.
Avante Heat & Avante Heat Phone are utilized to enable call/ticket automation.
Ensuring knowledge base articles are configured correctly for visibility
Giving customers the ability to cancel/re-open requests via portal
Special user portal, "Solution Center", which gives users a single location to go to select things
they want to order.
Mobile ordering of requests through “Articulate”, being used primarily by IT users currently.
Self Service knowledge base with 10,000+ articles.
User self-re-imaging of laptops if needed.
Shortened call times and reduced volume by retraining users and vocalizing self-service options
when they call in; empowering users to fix their own issues.
Drop in overall call volume through a tool that auto-connects users to printers in the area; fewer
calls on printer setup.
To drive increased self-service use
o Focus on educating callers about self-service options, especially password reset
o Include link to self-service portal in email signature
o Customer-facing knowledge written in customer context
4. SLA / OLA
FACILITATOR QUESTION: What does your IT Organization do really well with regard to Service Level
Agreement or Operating Level Agreements?
Examples: Customer SLA’s, scheduled downtime, maintenance windows, Operating level agreements,
Premium service / executive support
Good Practices Responses:
One attendee spoke of the importance of SLAs from a consultant's perspective: if the SLAs aren't
met, they don't get paid. His suggestions were:
o Use templates instead of creating from scratch
o Utilize strong CSI initiatives to catch and fix problems
o Strong metrics utilization is a must, as SLAs are 100% data driven
o Utilize a defined SIAM model
Weekly color-coded reports are sent out to teams and management to call out the top
offenders
Top-down accountability is a must
Usage of dashboards with real-time SLA information with different tabs for different
departments. Publish these at the executive level
One organization is making sure that all of IT is ITIL certified
Keep in mind that showing numbers is fine, but explaining WHY the numbers are where they are
is key
“OLAs are for our whiners” was the only thing stated about OLAs in our group
Mean time to restore or detect KPIs.
Target times for resolution published to users.
Priority based resolution times.
Acknowledgement/acceptance-based resolution SLAs. The clock starts when the assignee accepts
the ticket and cannot be paused. They ran into issues with users placing tickets in pending status
and never restarting them, which skewed ticket times, so they moved to a no-clock-stopping
model once acceptance starts the timer.
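The acceptance-started, no-pause clock described above can be sketched as follows (a minimal illustration; the ticket fields and the priority-based resolution targets are hypothetical, not any organization's actual SLA figures):

```python
from datetime import datetime, timedelta

# Hypothetical priority-based resolution targets; real SLAs would also
# account for business hours, which this sketch ignores for simplicity.
RESOLUTION_TARGETS = {
    1: timedelta(hours=4),    # critical
    2: timedelta(hours=8),
    3: timedelta(hours=24),
    4: timedelta(hours=72),
}

def sla_breached(accepted_at: datetime, resolved_at: datetime, priority: int) -> bool:
    """The clock starts when the assignee accepts the ticket and never
    pauses, so a 'pending' status cannot be used to stop the timer."""
    return (resolved_at - accepted_at) > RESOLUTION_TARGETS[priority]
```

The deliberate absence of any pause/resume parameter is the point of the model: elapsed time is a pure function of two timestamps, so ticket times cannot be skewed.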
Built-in ticket priority calculator (Clinic), based on questions asked of the end user submitting
the issue. Users cannot skew the answers because they do not know how the answers drive the
priority matrix on the backend. They ask things around problem resolution, break/fix impact,
financial impact, locational impact… and the system defines the priority score and puts an SLA
on the ticket accordingly.
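A matrix like this can be sketched as a weighted score over the user's answers. The question names, weights, and score bands below are illustrative assumptions, not the Clinic's actual matrix:

```python
# Hypothetical question weights; kept on the backend so users cannot
# game the score by guessing which answers raise priority.
WEIGHTS = {
    "break_fix_impact": 3,   # is something actually broken?
    "financial_impact": 4,   # does the issue cost money while unresolved?
    "locational_impact": 2,  # one user, one site, or enterprise-wide?
}

def priority(answers: dict) -> int:
    """Return priority 1 (highest) .. 4 (lowest) from yes/no answers."""
    score = sum(WEIGHTS[q] for q, yes in answers.items() if yes)
    if score >= 7:
        return 1
    if score >= 5:
        return 2
    if score >= 3:
        return 3
    return 4
```

The computed priority would then index into the SLA targets, putting a resolution clock on the ticket automatically.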
Special VIP help lines for heightened SLAs. Number only given out to VIPs and VIP admins.
Short list of VIPs and strict requirements to be placed on list.
5. Metrics & Reporting
FACILITATOR QUESTION: What does your IT Organization do really well with regard to Metrics and Key
Performance Indicators (KPIs)?
Examples: What metrics are used to drive IT Support and staffing decisions? What metrics are used to
evaluate CSI opportunities? What Dashboard tools or other reporting capabilities do you use?
Good Practices Responses:
Accuracy is key
Service Desk reports commonly mentioned:
o Call resolution times
o Cost per ticket
o Cost per service request
o Percentage of headcount to tickets
o This one always works when justifying a new requisition
Report automation
Dashboard of real-time metrics displayed on screen to view current ticket status
Usage of Tableau viewer for more robust dashboards
Performance level metrics are created with the top number of items a team/person is
responsible for to develop a scorecard
Reports used for accountability down to the agent level, and this determines personal review
scores and eligibility for shift bids, etc.
Utilizing MTTR reports to measure the efficiency of support processes
Use of event-management-based auto-ticketing led to a lot of extra "noise" incidents; special
reports were made to trim out the noise and get to a true incident count.
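Noise trimming of this kind can be sketched by collapsing repeated auto-tickets from the same event source within a short window and dropping sources flagged as known-noisy. The field names, the 15-minute window, and the example source names are assumptions for illustration:

```python
from datetime import datetime, timedelta

NOISY_SOURCES = {"disk-temp-sensor-7"}   # hypothetical known-noise generators
WINDOW = timedelta(minutes=15)

def true_incident_count(tickets):
    """tickets: list of (source, opened_at) tuples, any order.
    Counts a ticket only when its source was quiet for longer than
    WINDOW, treating rapid repeats as one underlying incident."""
    counted = 0
    last_seen = {}
    for source, opened_at in sorted(tickets, key=lambda t: t[1]):
        if source in NOISY_SOURCES:
            continue
        prev = last_seen.get(source)
        if prev is None or opened_at - prev > WINDOW:
            counted += 1
        last_seen[source] = opened_at
    return counted
```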
Extra data mining and dollar impact reviews on critical incidents.
Published monthly KPIs on critical incidents, changes, and regular incidents.
Workforce management team within the Service Desk identifies capacity needs through
data modeling and reporting to properly staff areas based on trend reviews. Trend reviews are
also used to identify "what is out of line" in the environment and any unusual spikes in volume
(while also keeping in account the size of the area reviewed and the standard number of tickets
received).
Reviews of incidents per user count over time, 12 month trending review.
“What has changed” report to review current month vs. prior month and see if there were any
spikes in ticket volume.
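A "what has changed" report like the one above can be sketched as a month-over-month comparison that flags categories whose volume spiked. The 25% threshold and minimum-volume floor are illustrative assumptions; the floor reflects the notes' point about accounting for the size of the area reviewed:

```python
def spike_report(prior: dict, current: dict, threshold=0.25, floor=20):
    """prior/current: {category: ticket_count} for each month.
    Return {category: fractional_change} for categories whose volume
    grew by more than `threshold` and has at least `floor` tickets."""
    spikes = {}
    for cat, cur in current.items():
        prev = prior.get(cat, 0)
        if cur < floor or prev == 0:
            continue  # too small, or new category with no baseline
        change = (cur - prev) / prev
        if change > threshold:
            spikes[cat] = round(change, 2)
    return spikes
```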
Create dashboards showing key performance indicators
Show trends
6. Customer Feedback
FACILITATOR QUESTION: What does your IT Organization do really well with regard to obtaining
user / customer feedback on service?
Examples: How do you get customer feedback? Surveys? Steering Committees / Advisory Groups,
annual surveys? How do you design IT services to meet customer expectations?
Good Practices Responses:
Customer satisfaction surveys
Polling of customers
Utilizing focus groups
Quarterly business reviews with customers
Annual executive reviews
Every incident sends out a customer satisfaction review when the incident is closed
A team dedicated to handling complaints. Complaints are scored as 'fault' or 'not at fault' (e.g., a
customer complains there is no diet pop in the vending machine – that is part of company
policy, so it is a 'not at fault' complaint). Complaint reports are sent to the CEO.
Customer surveys for service tickets are combined with the standard resolution email into a
single email. This way users are not given a survey long after their issue is already fixed (when
they are back up and working and no longer care about the issue). Users receive the information
about their issue being fixed along with a quick, limited-question survey, so they can address
their feedback without having to open another email at a later date. This has led to a more than
45% return rate on customer surveys.
After call automated surveys for call in issues that are fixed immediately with first-call
resolution.
Periodic surveys out to end users every other year.
Special surveys out to IT users that also launch a special investigation process to attempt to fix
the IT issue before future issues are reported.
IT Advocacy focus group, which holds regular meetings with end users. Meets quarterly, and
different end user groups/divisions meet with this group each quarter. They address their
concerns and can express what they may want in the future. This is a top-down-led group,
where leadership assigns who will be meeting. Attendance is voluntary, but turnout has been
good.
Direct meetings with customers (management by walking around)
Quarterly business review meetings
On-site hypercare support
Personalizing / humanizing the support organization results in increased feedback
o End users feel a connection and are more willing to talk
7. Problem Management
FACILITATOR QUESTION: What does your IT Organization do really well with regard to IT Problem
Management?
Examples: Root cause? Proactive Incident analysis? Trend Analysis? Known Error database?
Good Practices Responses:
Utilizing an RCA process with a wide scope
Being proactive in tracking down potential problems via trend reports
o Teams dedicated to problem management who review incidents for recurrence, track
how many times certain knowledge articles are used, etc.
Utilizing a question calculator to define problem priority
Have a clear definition of what a problem is and set formal criteria for problem resolution
Cause analysis for problem tickets reviewed in meetings with multiple affected groups,
facilitated by a designated set of problem managers. The problem management performance
score is linked in with year-end reviews, which drives more cooperation and attendance.
Aging reports on tickets, and known errors database for review.
One centralized problem management team (approx. 15 in the group from different areas), with
dedicated problem managers and individual application managers.
Weekly meetings to discuss major/critical incidents from the prior week for after action reviews,
and identification of cost resulting from the critical incident. Helps to put more emphasis on
areas in the environment that may need help and cost more money when things break.
Problem review board which receives trend reports and works towards proactive problem
management.
Formal process
o Problem team focused on facilitating root cause analysis
o Exclude managers to earn trust, focus on ‘what’s in it for me?’
Monthly operational reviews
Lori's side note: I was pleasantly surprised to learn that the small portion of problem management we
utilize here at SW was actually better than most I heard at our table, so we are not as far behind as we
think! That actually goes for change management as well.