In this presentation, we will discuss the challenges of managing IT infrastructure, focusing on server sizing, storage capacity planning, and internet connectivity. We will also cover how to set up a security architecture and a disaster recovery plan.
To know more about Welingkar School’s Distance Learning Program and courses offered, visit:
http://www.welingkaronline.org/distance-learning/online-mba.html
2. Challenges in Managing
Infrastructure
Planning a New Setup
Planning a new information technology setup requires substantial study of products and business needs. The architect needs to understand the business model and the requirements of the enterprise.
While designing a server room or data center, the following factors are considered:
•Server sizing and load balancing
•Storage capacity planning
•Internet connectivity and security architecture
•BCP/disaster recovery plan
3. Server Sizing/Deployment
plan
Intel- and AMD-based PC-architecture servers are now used in critical enterprise application deployments.
Choosing the right type of server and hardware for the required number of users is very important.
Blade Server
Blade servers are self-contained computer servers designed for high density, whereas a standard rack-mount server can function with (at least) a power cord and a network cable.
4. Blade servers
Blade servers have many components removed for
space, power and other considerations while still
having all the functional components to be considered
a computer. A blade enclosure provides services such
as power, cooling, networking, various interconnects
and management - though different blade providers
have differing principles around what should and
should not be included in the blade itself (and
sometimes in the enclosure altogether). Together
these form the blade system.
5. Blade servers
In a standard server-rack configuration, 1U (one rack
unit, 19" wide and 1.75" tall) is the minimum possible
size of any equipment. The principal benefit of, and the
reason behind the push towards, blade computing is
that components are no longer restricted to these
minimum size requirements. The most common
computer rack form-factor being 42U high, this limits
the number of discrete computer devices directly
mounted in a rack to 42 components. Blades do not
have this limitation; densities of 100 computers per
rack and more are achievable with the current
generation of blade systems.
6. Blade servers
In the purest definition of computing (a Turing machine, simplified here), a computer requires only:
•memory to read input commands and data
•a processor to perform commands manipulating that
data, and
•memory to store the results.
Today (contrast with the first general-purpose
computer) these are implemented as electrical
components requiring (DC) power, which produces
heat. Other components such as hard drives, power
supplies, storage and network connections, basic IO
(such as Keyboard, Video and Mouse and serial) etc.
7. Blade Servers
only support the basic computing function, yet add
bulk, heat and complexity, not to mention moving parts
that are more prone to failure than solid-state
components.
In practice, these components are all required if the
computer is to perform real-world work. In the blade
paradigm, most of these functions are removed from
the blade computer, being either provided by the blade
enclosure (e.g. DC power supply), virtualised (e.g.
iSCSI storage, remote console over IP) or discarded
entirely (e.g. serial ports). The blade itself becomes
vastly simpler, hence smaller and (in theory) cheaper
to manufacture.
8. Blade Servers
Blade servers are ideal for specific purposes such as
web hosting and cluster computing. Individual blades
are typically hot-swappable. As more processing
power, memory and I/O bandwidth are added to blade
servers, they are being used for larger and more
diverse workloads.
Although blade server technology in theory allows for
open, cross-vendor solutions, at this stage of
development of the technology, users find there are
fewer problems when using blades, racks and blade
management tools from the same vendor.
9. Blade Servers
[Figure: A stack of IBM HS20 blade servers. Each "blade" has two 2.8 GHz Xeon CPUs, two 36 GB Ultra-320 SCSI hard drives and 2 GB RAM.]
10. Blade Servers
Blade servers are not, however,
the answer to every computing
problem. They may best be viewed
as a form of productized server
farm that borrows from mainframe
packaging, cooling, and power
supply technology. For large
problems, server farms of blade
servers are still necessary, and
because of blade servers' high
power density, can suffer even
more acutely from the HVAC
problems that affect large
conventional server farms.
11. Storage Area Network
In computing, a storage area network (SAN) is an
architecture to attach remote computer storage devices such
as disk array controllers, tape libraries and CD arrays to
servers in such a way that to the operating system the
devices appear as locally attached devices. Although cost and complexity are dropping, as of 2007, SANs are still uncommon outside larger enterprises.
(By contrast to a SAN, network-attached storage (NAS) uses file-based protocols such as NFS or SMB/CIFS, where it is clear that the storage is remote and computers request a portion of an abstract file rather than a disk block.)
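The block-versus-file distinction can be sketched in code. The following Python toy is purely illustrative (the in-memory "disk", the share dictionary, and the function names are assumptions, not any real SAN/NAS API): a SAN-style client addresses raw blocks by number, while a NAS-style client asks for a named file.

```python
BLOCK_SIZE = 512

def san_read_block(device: bytearray, lba: int) -> bytes:
    """SAN-style access: the client addresses raw blocks by Logical
    Block Address (LBA); interpreting the bytes is the client's job."""
    start = lba * BLOCK_SIZE
    return bytes(device[start:start + BLOCK_SIZE])

def nas_read_file(share: dict, path: str) -> bytes:
    """NAS-style access: the client names a file; the storage system
    resolves it to blocks internally."""
    return share[path]

# A toy 4-block "disk" and a toy file share (both illustrative).
disk = bytearray(4 * BLOCK_SIZE)
disk[0:5] = b"hello"
share = {"/docs/report.txt": b"quarterly figures"}

print(san_read_block(disk, 0)[:5])              # raw bytes the OS must interpret
print(nas_read_file(share, "/docs/report.txt"))  # a whole file by name
```

The point of the sketch: the SAN client sees only bytes at block addresses (its own filesystem supplies meaning), whereas the NAS client works at the file level.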
12. Storage Area Network
Most storage networks use the SCSI protocol for
communication between servers and disk drive
devices, though they do not use its low-level
physical interface, instead using a mapping layer
such as the FCP mapping standard.
•Fibre Channel, currently the most common. Comes
in 1Gbit, 2Gbit and 4Gbit variants
•iSCSI, mapping SCSI over TCP/IP
•HyperSCSI, mapping SCSI over Ethernet
•ATA over Ethernet, mapping ATA over Ethernet
13. Storage Area Network
Advantages
Sharing storage usually simplifies storage
administration and adds flexibility since cables and
storage devices do not have to be physically moved
to move storage from one server to another. Note,
though, that with the exception of SAN file systems
and clustered computing, SAN storage is still a one-
to-one relationship. That is, each device, or Logical
Unit Number (LUN) on the SAN is "owned" by a
single computer (or initiator). In contrast, Network
Attached Storage (NAS) allows many computers to
access the same set of files over a network.
14. Storage Area Network
Advantages
SANs tend to increase storage capacity
utilization, since multiple servers can share the
same growth reserve.
Other benefits include the ability to allow servers to
boot from the SAN itself. This allows for a quick
and easy replacement of faulty servers since the
SAN can be reconfigured so that a replacement
server can use the LUN of the faulty server. This
process can take as little as half an hour and is a
relatively new idea being pioneered in newer data
centers.
15. Storage Area Network
Advantages
Serverless backup (third-party copying)
This allows a disk storage device to copy data directly to backup devices across the high-speed links of the SAN, without any intervention from the server.
Lower total cost of ownership
While the initial cost is higher, the inherent flexibility and scalability, together with reduced management complexity and cost, deliver a long-term cost benefit.
16. Storage Area Network
Advantages
Efficient capacity utilization
By consolidating storage resources and sharing capacity across multiple servers, a SAN generally utilizes 50% more capacity per storage device than DAS. This further optimizes the storage spend.
Centralized storage management
By centralizing the management of all storage resources, even vast amounts of storage can be managed by a small IT staff.
Superior data protection
SANs provide the infrastructure to implement advanced data protection features.
Increased user productivity
17. Storage Area Network
Advantages
SANs also tend to enable more effective disaster recovery
processes. A SAN attached storage array can replicate data
belonging to many servers to a secondary storage array. This
secondary array can be local or, more typically, remote. The
goal of disaster recovery is to place copies of data outside
the radius of effect of an anticipated threat, and so the long-
distance transport capabilities of SAN protocols such as
Fibre Channel and FCIP are required to support these
solutions.
18. Storage virtualization and SANs
Storage virtualization refers to the process of completely
abstracting logical storage from physical storage. The
physical storage resources are aggregated into storage pools,
from which the logical storage is created. With storage
virtualization, multiple independent storage devices, that
may be scattered over a network, appear to be a single
monolithic storage device, which can be managed centrally.
Storage Virtualization is commonly used in SANs.
Virtualization of storage helps achieve location
independence by abstracting the physical location of the
data. The Virtualization system presents to the user a logical
space for data storage and itself handles the process of
mapping it to the actual physical location.
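The logical-to-physical mapping that storage virtualization performs can be illustrated with a minimal sketch. The extent table below is hypothetical (device names, sizes, and the `resolve` helper are illustrative assumptions); real virtualization layers keep far richer metadata.

```python
# A logical volume is described as an ordered list of extents:
# (physical device, physical start block, length in blocks).
volume_map = {
    "vol0": [("array-A", 0, 1000), ("array-B", 500, 1000)],
}

def resolve(volume: str, logical_off: int):
    """Map a logical block offset to (device, physical offset).
    This is the mapping the virtualization layer hides from users."""
    pos = 0
    for device, phys_start, length in volume_map[volume]:
        if logical_off < pos + length:
            return device, phys_start + (logical_off - pos)
        pos += length
    raise ValueError("offset beyond end of volume")

print(resolve("vol0", 10))    # lands in the first extent on array-A
print(resolve("vol0", 1200))  # lands in the second extent on array-B
```

The user sees one contiguous "vol0"; the layer silently spreads it across two arrays, which is the location independence the slide describes.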
19. Network-attached storage
Network-attached storage (NAS) is the name
given to dedicated data storage technology
which can be connected directly to a computer
network to provide centralized data access and
storage to heterogeneous network clients.
20. Network-attached storage
NAS differs from the traditional file serving and
Direct Attached Storage in that the operating system
and other software on the NAS unit provide only the
functionality of data storage, data access and the
management of these functionalities. Furthermore,
the NAS unit does not limit clients to only one file
transfer protocol. NAS systems usually contain one
or more hard disks, often arranged into logical,
redundant storage containers or RAIDs (redundant
arrays of independent disks), as do traditional file
servers.
21. Network-attached storage
NAS removes the responsibility of file serving from
other servers on the network and can be deployed
via commercial embedded units or via standard
computers running NAS software.
NAS uses file-based protocols such as NFS (popular
on UNIX systems) or SMB (Server Message Block)
(used with MS Windows systems). Contrast NAS's
file-based approach and use of well-understood
protocols with storage area network (SAN) which
uses a block-based approach and generally runs over
SCSI over Fibre Channel or iSCSI.
22. Network-attached storage
(There are other SAN protocols as well, such as
ATA over Ethernet and HyperSCSI, which however
are less common.)
Minimal-functionality or stripped-down
operating systems are used on NAS computers
or devices which run the protocols and file
applications which provide the NAS
functionality. A "leaned-out" FreeBSD is used
in FreeNAS, for example, which is open source
NAS software meant to be deployed on
standard computer hardware.
23. Network-attached storage
Commercial embedded devices and consumer
"network appliances" may use closed source operating
systems and protocol implementations.
The boundaries between NAS and storage area
network systems are also starting to overlap, with some
products making the obvious next evolution and
offering both file level protocols (NAS) and block level
protocols (SAN) from the same system.
An example of this is Openfiler, an open-source product running on Linux. San Magazine did a very informative review of this hybrid functionality.
24. Network-attached storage
Benefits
Availability of data can potentially be increased with
NAS because data access is not dependent on a
server*: the server can be down and users will still
have access to data on the NAS. Performance can be
increased by NAS because the file serving is done by
the NAS and not done by a server responsible for
also doing other processing. The performance of
NAS devices, though, depends heavily on the speed
of and traffic on the network and on the amount of
cache memory (the equivalent of RAM) on the NAS
computers or devices.
25. Network-attached storage
Benefits
Scalability of NAS is not limited by the number
of internal or external ports of a server's data
bus, as a NAS device can be connected to any
available network jack. NAS can be more
reliable than DAS because it separates the
storage from the server. If the server fails, there
is unlikely to be file system corruption, although
partially-created files may linger. However, if the
power source or OS of the NAS fails, corruption
is still possible.
26. Network-attached storage
Benefits
* Note that a NAS is effectively a server in itself, with all the major components of a typical PC (a CPU, motherboard, RAM, etc.); in fact, many run an embedded Linux. Its reliability is a function of how well it is designed internally. A NAS without redundant data access paths, redundant controllers, and redundant power supplies is probably less reliable than DAS connected to a server which does have redundancy for its major components. That is to say, the NAS itself becomes a single point of failure.
27. Network-attached storage
Drawbacks
Due to its multiprotocol design and its reduced CPU and OS layer, NAS has limitations compared to DAS/FC systems. If a NAS is burdened with too many users, too much I/O, or processing that is too CPU-intensive, it reaches its limits. A server system is easily upgraded by adding one or more servers to a cluster, so CPU power can be scaled up, while a NAS is limited to its own hardware, which in most cases is not upgradable.
The key difference between DAS and NAS is the reduced CPU and I/O power offered by the latter.
28. Network-attached storage
NAS uses
NAS is useful for more than just general centralized storage
provided to client computers in environments with large
amounts of data. NAS can enable simpler and lower cost
systems such as load-balancing and fault-tolerant email and
web server systems by providing storage services. The
potential emerging market for NAS is the consumer market
where there is a large amount of multi-media data. Such
consumer market appliances are now commonly available.
Unlike their rackmounted counterparts, they are generally
packaged in smaller form factors. The price of NAS
appliances has plummeted in recent years, offering flexible
network based storage to the home consumer market for
little more than the cost of a regular USB or FireWire
external hard disk.
29. Network-attached storage
SAN Vs NAS
Often seen as competing technologies, SAN and NAS actually complement each other very well, providing access to different types of data: SANs are optimized for high-volume, block-oriented data transfers, while NAS is designed to provide data access at the file level.
Both technologies satisfy the need to remove direct storage-to-server connections and so facilitate more flexible storage access.
In addition, both are based on open industry-standard protocols.
30. Capacity Planning issues on Ongoing basis
The planning is done at the beginning of the project and includes:
•A 30% extra provision to accommodate growth
•Server sizing and planning done with redundancy for critical servers in mind
•CPU processing power: how much power is required for processing data, and whether we are utilizing the full CPU power
CPU utilization is a physical metric which needs to be monitored during changeover.
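The 30% growth provision mentioned above is simple arithmetic, and it pairs naturally with a rough estimate of when the reserve runs out. A minimal sketch (the headroom figure comes from the slide; the quarterly growth numbers and function names are illustrative assumptions):

```python
def provisioned_capacity(current_need_gb: float, headroom: float = 0.30) -> float:
    """Add a growth reserve (the slide suggests 30%) on top of the
    measured requirement."""
    return current_need_gb * (1 + headroom)

def exhaustion_quarters(provisioned_gb: float, used_gb: float,
                        growth_per_quarter_gb: float) -> int:
    """Rough count of quarters until the growth reserve is consumed,
    assuming linear growth."""
    quarters = 0
    while used_gb < provisioned_gb:
        used_gb += growth_per_quarter_gb
        quarters += 1
    return quarters

print(provisioned_capacity(1000))            # 1300.0 GB to provision
print(exhaustion_quarters(1300, 1000, 100))  # quarters of runway at +100 GB/qtr
```

A real capacity plan would of course use monitored growth trends rather than a fixed linear guess, but the headroom calculation itself is this simple.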
31. Maintenance
(Corrective/Preventive)
Corrective Maintenance
Corrective maintenance can be classified into two categories:
Software calls
Hardware calls
The majority of calls fall into the first category. Software calls relate to the operating system, applications, or database connectivity, and can often be resolved using NetMeeting or remote desktop management software.
32. Network Monitoring
The term network monitoring describes the use of
a system that constantly monitors a computer
network for slow or failing systems and that notifies
the network administrator in case of outages via
email, pager or other alarms. It is a subset of the
functions involved in network management.
While an intrusion detection system monitors a
network for threats from the outside, a network
monitoring system monitors the network for
problems due to overloaded and/or crashed servers,
network connections or other devices.
33. Network Monitoring
For example, to determine the status of a
webserver, monitoring software may periodically
send an HTTP request to fetch a page; for email
servers, a test message might be sent through SMTP
and retrieved by IMAP or POP3.
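A minimal version of such an HTTP probe can be written with Python's standard library alone. This is an illustrative sketch, not a production monitor (no retries, scheduling, or alerting), and the availability helper simply computes the uptime fraction over a series of probe results:

```python
import urllib.request
from urllib.error import URLError

def check_http(url: str, timeout: float = 5.0) -> bool:
    """One probe: fetch the page and treat any 2xx/3xx answer as 'up'.
    Connection failures and timeouts count as 'down'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def availability(results: list) -> float:
    """Fraction of successful probes -- the 'uptime' metric the
    slide mentions."""
    return sum(results) / len(results) if results else 0.0

# e.g. the outcome of ten periodic probes, one of which failed:
print(availability([True] * 9 + [False]))  # 0.9
```

A real monitoring system would run `check_http` on a schedule, record response times as well, and raise an email or pager alert when a probe fails.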
Commonly measured metrics are response time and availability (or uptime), although consistency and reliability metrics are also starting to gain popularity. Status request failures, such as when a connection cannot be established, the request times out, or the document or message cannot be retrieved, usually prompt an alert to the administrator.
34. Network Administrator
Network administrators are basically the network
equivalent of system administrators: they maintain
the hardware and software that comprises the
network. This normally includes the deployment,
configuration, maintenance and monitoring of
active network gear: switches, routers, firewalls,
etc. Network administration commonly includes
activities such as network address assignment,
assignment of routing protocols and routing table
configuration as well as configuration of
authentication and authorization – directory
services.
35. Network Administrator
It often includes maintenance of network facilities
in individual machines, such as drivers and settings
of personal computers as well as printers and such.
It sometimes also includes maintenance of certain
network servers: file servers, VPN gateways,
intrusion detection systems, etc.
Network specialists and analysts concentrate on the
network design and security, particularly
troubleshooting and/or debugging network-related
problems.
36. Network Administrator
Their work can also include the maintenance of
the network's
authorization infrastructure, as well as network
backup systems.
They also perform network management
functions including:
•provide support services
•ensure that the network is used efficiently, and
•ensure prescribed service-quality objectives are
met.
37. Network management
Network management refers to the
maintenance and administration of large-scale
computer networks and telecommunications
networks at the top level.
Network management is the execution of the set of
functions required for controlling, planning,
allocating, deploying, coordinating, and monitoring
the resources of a network, including performing
functions such as initial network planning,
frequency allocation, predetermined traffic routing
to support load balancing,
38. Network management
cryptographic key distribution authorization, configuration management, fault management, security management, performance management, bandwidth management, and accounting management. A large number of protocols exist to support network and network device management. Common protocols include SNMP, CMIP, WBEM, Common Information Model, Transaction Language 1, Java Management Extensions (JMX), and NETCONF.
39. Network management
Data for network management is collected
through several mechanisms, including agents
installed on infrastructure, synthetic monitoring
that simulates transactions, logs of activity,
sniffers and real user monitoring.
Note: Network management does not include
user terminal equipment
41. Duties of System administrators
4.1 System administrators are responsible for the
security of information stored on these resources.
4.2 Administrators must take appropriate and
reasonable steps to inhibit attempts to obtain
unauthorized copies of computer software, computer
data and/or software manuals.
4.3 Administrators must take appropriate and
reasonable steps to make sure that the number of
simultaneous users of software does not exceed the
number of original copies purchased.
42. Duties of System administrators
4.4 Administrators should take steps to ensure that assigned passwords are non-trivial, and users should be given guidelines for choosing strong passwords.
4.5 Administrators must take appropriate and reasonable
steps to assure that access to the computer operations
areas is restricted to those responsible for operation and
maintenance.
4.6 Default passwords shipped with servers, operating
systems software or applications must always be changed
when the hardware or application is installed or
implemented.
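Guideline 4.4 (non-trivial passwords) and guideline 4.6 (change default passwords) can be approximated in code. The length threshold, the character-class rule, and the default-password list below are illustrative assumptions, not taken from any standard or from the policy itself:

```python
import string

# Illustrative list of vendor-default / trivial passwords (guideline 4.6).
COMMON_DEFAULTS = {"admin", "password", "changeme", "root", "12345678"}

def is_strong(password: str, min_len: int = 10) -> bool:
    """Rough 'non-trivial' test: minimum length, a mix of character
    classes, and not a well-known default. Thresholds are illustrative."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return (len(password) >= min_len
            and sum(classes) >= 3
            and password.lower() not in COMMON_DEFAULTS)

print(is_strong("admin"))             # trivially weak
print(is_strong("V3ry-Str0ng-Pass"))  # passes the rough checks
```

An administrator would typically enforce rules like these through the system's own password policy (e.g. PAM modules or directory-service policies) rather than ad-hoc scripts; the sketch only makes the guideline concrete.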
43. Duties of System administrators
4.7 Special access to information or other special computing privileges is to be used only in the performance of official duties.
4.8 Gaining unauthorized access to a system (or area of
a system) using knowledge of access abilities gained
during a previous position at the institution is prohibited.
4.9 System administrators should never give access to
any user on a system they do not administer.
44. Duties of System administrators
4.10 Computer installations will have defined
procedures for maintaining data integrity during
hardware repair, and will set up a schedule of
preventive maintenance for the computer
systems where appropriate.
4.11 System administrators should install fixes to
known system problems as expeditiously as
possible.
4.12 Sessions with root or other privileged access must be logged off, to a point that requires a new log-on, whenever you leave your work area.
45. Network Securities
What is network security?
Network security consists of the provisions
made in an underlying computer network
infrastructure, policies adopted by the network
administrator to protect the network and the
network-accessible resources from
unauthorized access and the effectiveness (or
lack) of these measures combined together.
46. Network Securities
What is a network?
In order to fully understand network security, one must first
understand what exactly a network is. A network is a group
of computers that are connected. Computers can be
connected in a variety of ways. Some of these ways include
a USB port, phone line connection, Ethernet connection, or a
wireless connection. The Internet is basically a network of
networks. An Internet Service Provider (ISP) is also a
network. When a computer connects to the internet, it joins
the ISP’s network which is joined with a variety of other
networks, which are joined with even more networks, and so
on. These networks together encompass the Internet. The vast number of computers on the Internet, and the number of ISPs and large networks, makes securing it a considerable challenge.
47. Network Securities
Common Network Security Breaches
Hackers often try to hack into vulnerable networks. Hackers
use a variety of different attacks to cripple a network.
Whether you have a home network or a LAN, it is important
to know how hackers will attack a network.
One common way for a hacker to wreak havoc is to achieve
access to things that ordinary users shouldn’t have access to.
In any network, administrators have the ability to mark certain parts of the network as off-limits to unauthorized access. If a
hacker is able to gain access to a protected area of the
network, he or she can possibly affect all of the computers
on the network. Some hackers attempt to break into certain
networks and release viruses that affect all of the computers
in the network. Some hackers can also view information that
they are not supposed to see.
48. Network Securities
Destructive Attacks
There are two major categories of destructive attacks on a network. Data diddling is the first. It usually is not immediately apparent that something is wrong with your computer when it has been subjected to a data diddler. Data diddlers will generally change numbers or files slightly, and the damage becomes apparent much later. Once a problem is discovered, it can be very difficult to trust any of your previous data, because the culprit could have potentially tampered with many different documents.
49. Network Securities
Destructive Attacks
The second type of data destruction is outright deletion. Some hackers will simply hack into a computer and delete essential files. This inevitably causes major problems for any business. Such attacks can tear operating systems apart and cause terrible problems for a network or a computer.
50. The Importance of Network Security
Knowing how destructive hackers can be shows you the
importance of Network Security. Most networks have
firewalls enabled that block hackers and viruses. Having
anti-virus software on all computers in a network is a must.
In a network, all of the computers are connected, so that if
one computer gets a virus, all of the other computers can be
adversely affected by this same virus. Any network
administrator should keep all of the essential files on backup disks. If a file is deleted by a hacker but you have it on backup, then there is no issue. When files are lost forever,
major problems ensue. Network security is an important
thing for a business, or a home. Hackers try to make
people’s lives difficult, but if you are ready for them, your
network will be safe.
51. Network Securities
How different is it from computer security?
In plain words...
Securing any network infrastructure is like securing
possible entry points of attacks on a country by
deploying appropriate defense. Computer security is
more like providing means of self-defense to each
individual citizen of the country. The former is more practical for protecting citizens from being exposed to attacks.
52. Network Securities
The preventive measures attempt to secure access to individual computers (the network itself), thereby protecting the computers and other shared resources, such as printers and network-attached storage, connected by the network.
Attacks could be stopped at their entry points
before they spread. As opposed to this, in
computer security the measures taken are
focused on securing individual computer hosts.
53. Network Securities
A computer host whose security is
compromised is likely to infect other hosts
connected to a potentially unsecured network.
A computer host's security is vulnerable to
users with higher access privileges to those
hosts.
Network security starts with authenticating any user. Once authenticated, a firewall enforces access policies, such as which services the network users are allowed to access.
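The post-authentication policy check described above can be sketched as a simple lookup. The roles, service names, and the policy table are hypothetical; real firewalls match on addresses, ports, and protocols rather than named roles:

```python
# Hypothetical policy table: which services each authenticated
# role may reach. Entirely illustrative.
POLICY = {
    "staff": {"http", "imap"},
    "admin": {"http", "imap", "ssh", "snmp"},
}

def is_allowed(role: str, service: str) -> bool:
    """Firewall-style access check, applied after the user has
    already been authenticated. Unknown roles get nothing."""
    return service in POLICY.get(role, set())

print(is_allowed("staff", "ssh"))   # staff may not open SSH sessions
print(is_allowed("admin", "ssh"))   # admins may
```

The design point is default-deny: anything not explicitly listed in the policy is refused, which mirrors how firewall rule sets are normally written.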
54. Network Securities
Honeypots, essentially decoy network-accessible
resources, could be deployed in a network as
surveillance and early-warning tools. Techniques
used by the attackers that attempt to compromise
these decoy resources are studied during and after
an attack to keep an eye on new exploitation
techniques. Such analysis could be used to further
tighten security of the actual network being
protected by the honeypot.
55. Network Securities
A highly experienced security team provides the following services:
Destructive Code Scanning & Content Filtering
Development of Security Policies & Procedures
Firewall Implementation
Hardware Inventory & Software Compliance
Incident Response & Investigation
Intrusion Detection Systems
Network Security Analysis, Penetration Testing
Physical Security & Access Control
Secure Architecture Design
Secure E-mail Services
Virus Detection & Mitigation Planning
Virtual Private Networks
56. Network Securities
Log classification
Server operating system logs
Email records
Internet usage
Remote access
Database transactions
Firewall logs
Intrusion detection software logs
Software security monitoring/violations
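A crude way to sort incoming lines into categories like those above is keyword matching. The patterns and sample lines here are illustrative only; real log sources need proper parsers per format:

```python
# Toy classifier mapping raw log lines to log categories.
# (category, keyword) pairs are checked in order; patterns are
# illustrative assumptions, not real log formats.
RULES = [
    ("firewall", "DROP"),
    ("email", "smtp"),
    ("remote access", "vpn"),
    ("intrusion detection", "ids"),
]

def classify(line: str) -> str:
    """Return the first category whose keyword appears in the line,
    case-insensitively, or 'other' if none match."""
    low = line.lower()
    for category, token in RULES:
        if token.lower() in low:
            return category
    return "other"

print(classify("kernel: DROP IN=eth0 SRC=10.0.0.9"))  # firewall
print(classify("smtp relay accepted message id=42"))  # email
```

In practice, classification like this is the first stage of centralized log management: once lines are bucketed, each category can be retained, monitored, and alerted on according to its own policy.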
57. Asset Management
AM means different things to different people. For some, it's just an inventory on an Excel spreadsheet. For the enlightened, it's an all-out monitoring and management process using comprehensive AM policies and sophisticated AM tools. While inventories are necessary, an inventory alone is not the solution.
Considering these factors, AM can be viewed as a set of
well-defined practices and processes governing the
acquisition, maintenance, and implementation of IT services.
In reality, AM has gone beyond an 'inventory' or 'financial'
type of definition.
58. Asset Management
Although these aspects are crucial, a fresh approach
includes factors like asset lifecycles, asset
utilization monitoring and optimization. The focus
is more on the 'usefulness' side of assets than just
'stocktaking'.
"Each asset has a lifecycle, and it involves managing assets while keeping in mind many aspects, right from physical security to whether they are serving their purpose to the end user."
59. Asset Management
If you do not track your financial assets, you cannot
manage them. The same philosophy is applicable to
managing enterprise-wide IT assets. A look at the
issues related to this fast evolving discipline.
There are two ways to keep track of an
organization's IT assets—you either do it the right
way or the wrong way. If managed in the correct
manner, even the most minimal of assets can go a
long way. When managed wrong, it is the fastest
way to put an end to the enterprise's IT
infrastructure efficiency.
60. Asset Management
There is no 'middle path' in this aspect of
Infrastructure Management (IM), which is where
many CTO/CIOs make the mistake. It is interesting
at this point to observe that most Indian
organizations have rudimentary Asset Management
(AM) policies incorporated in their IM policies. But
when the question is that of optimal monitoring and
management of assets for business benefits, the
field is still new to Indian enterprises.
61. Asset Management
So how does an organization manage its IT assets in the
most optimal manner?
Let's understand this through a hypothetical example.
Consider a typical server farm. Are the servers being
underutilized? If they are, then it would be cheaper to
consolidate servers. Or consider a scenario where the
organization does not keep track of the devices connected to
its network. It is very easy for unaccounted resources to
become non-functional without anyone being aware of it. It
could have serious implications for network availability. This is where monitoring comes into play when discussing asset management.
Asset Lifecycle Management (ALM) also plays a major part
in effective AM.
62. Benefits of Asset Management
The benefits associated with AM are direct and indirect. The
biggest advantage is that it helps an enterprise keep track of and utilize all its assets optimally. This is of great benefit in tracking TCO and ROI.
"With proper AM, it is possible to keep track of the capital
expenditure and also to arrive at the ROI, which the asset
has given over a period of time," said P. Rangarajan, Asst.
VP-Operations & Systems, Birla Sun Life.
AM also helps to tweak an enterprise's infrastructure for
optimal results. "The biggest advantages of using AM in an
enterprise is you can fine-tune and utilize the existing
resources in an intelligent manner," said S.B. Patankar,
Director-Information Systems, The Stock Exchange.
63. Benefits of Asset Management
It can be clearly seen that knowing the exact number of
computers that are actually being used from the entire
inventory helps when doing the next procurement. This also
provides direct financial benefits by avoiding loss.
"If you do not know the exact number of the equipment that
you have, there is a financial loss associated with it," said
Sharma.
AM is also very important when negotiating with vendors. CTO/CIOs have to deal with vendors regarding
AMCs and service contracts every year. Having an up-to-
date inventory of the equipment coming under warranty is
very handy during such negotiations—especially in
organizations having distributed infrastructure.
64. Benefits of Asset Management
This is true not just in the case of hardware but also
for software. The box story, 'Managing software
assets' details the issues related to software assets
and how to manage them optimally.
AM can also help the organization provide
resources to users according to their requirement.
For example, the requirement of data-entry
personnel in the Logistics department will be
different from that of the Accounts team.
65. A policy of Asset Management
A majority of Indian corporates build AM into
the infrastructure policy. While this is not bad
practice, the risk of AM losing its core focus cannot be
ruled out.
It is from this perspective that a company requires an AM
policy distinct from an IM policy. This is essential
because AM requires involvement from the entire
organization, not just IT. Most departments have their
own requirements when it comes to required assets.
The same AM strategy might not work throughout the
entire organization due to this. While the basics can be
common, different AM strategies tailor-made for each
department in an organization might be required.
66. Creating Asset Management strategy
The essential objective of an AM policy should be
to maximize the value of an asset over its entire
lifecycle. Even if the AM policy is integrated with
the IM policy, it has to be clearly outlined and
aligned with the business.
Now comes the issue of formulating an AM
strategy. When planning for future assets it is
essential to bring in future business growth and
associated requirements, which can be provided
only by the business. A company needs to consider
factors like TCO and distribution of assets.
67. Creating Asset Management strategy
"TCO, anticipated future technology trends, physical and
data related security, BCP related issues, and compliance are
the factors to consider when planning assets."
In such cases, it is essential that clauses ensuring periodic
surveys and policy enforcement are included. This will
ensure that AM practices are properly enforced. The policy
should also specify how assets are to be disposed of once
their lifecycle is over, detailing whether they have
to be returned (for leased assets) or sold off.
68. Creating Asset Management strategy
One of the first steps to proper AM is to have an inventory
in place. The inventory should have details of IT assets
across the enterprise.
The inventory should have information about the assets—
right from time of procurement/implementation, to changes
done at the end of its lifecycle.
"The asset has to be numbered and all the details of the asset
like purchase order number, installation date, warranty
period, and expiry of warranty have to be maintained. This
will enable tracking and monitoring of the assets properly,"
While it's easy to keep track of devices with IP addresses, it
is difficult to track other types of assets. This is where AM
tools can help out enterprises.
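The inventory record described above can be sketched in code. The field names, asset numbers, and dates below are illustrative, not taken from any particular AM tool:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Asset:
    """One record in the enterprise IT asset inventory."""
    asset_number: str      # unique tag assigned at procurement
    description: str
    purchase_order: str
    installation_date: date
    warranty_expiry: date

    def under_warranty(self, on: date) -> bool:
        """True if the asset is still covered by warranty on the given date."""
        return on <= self.warranty_expiry

# Example: flag assets whose warranty lapses before the next AMC negotiation.
inventory = [
    Asset("SRV-001", "Rack server", "PO-1042", date(2021, 4, 1), date(2024, 4, 1)),
    Asset("LAP-107", "Laptop", "PO-1188", date(2023, 1, 15), date(2026, 1, 15)),
]
negotiation_date = date(2025, 1, 1)
expired = [a.asset_number for a in inventory if not a.under_warranty(negotiation_date)]
print(expired)  # SRV-001's warranty has lapsed
```

Even this small structure supports the negotiation scenario above: an up-to-date list of warranty expiry dates falls out of the inventory directly.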
69. Digital Asset Management
The new age enterprise makes use of AM tools to keep track
of its assets. These tools greatly simplify the complexities
involved in tracking enterprise-wide assets.
"It is necessary to utilize the assets you have in the best
possible manner, as well as manage them. Both of these are
possible only with tools that can perform these
functions." AM tools can automatically detect device
information across the network and display it in different
ways, such as graphical and tabular formats. Inventory of
hardware and software assets is facilitated by such
features.
70. Digital Asset Management
Digital asset management means that you can study existing
projects and reuse valuable information (assets) from them.
Many of the AM tools available today are add-on modules to
IM tools. While the costs of these tools tend to be on the
higher side, the benefits justify the costs involved in most
cases.
The importance of digital asset management can be gauged
from the fact that it not only involves storing data in easily
understandable formats; the management software, apart from
storing and classifying data, also comes with features for
analyzing it, thereby ensuring that sound business decisions
are taken at the right time.
71. Bandwidth/Telecom Management
Bandwidth management concerns the internet
bandwidth available and its distribution to
various entities within the domain. Distributing a
single pipe to various user departments and
setting priorities has become possible these
days.
Bandwidth is the amount of information that can
flow through a network.
Consider traffic flow: an express highway carries
traffic faster than an ordinary highway, and more
cars can be driven on it. Similarly, high-volume
applications consume large amounts of bandwidth.
72. Bandwidth Management
Bandwidth management was developed as a
technique for managing the resource
consumption or priority of the various applications
consuming bandwidth on the network.
The most common technique used to implement
bandwidth management is based on a technology
called QoS (Quality of Service).
QoS identifies the application traffic passing
through the network, and then applies policies
designed to protect, prioritize, or restrict the bandwidth
consumed by it.
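A common building block behind this kind of protect/restrict policy is the token bucket. The sketch below is a minimal, illustrative Python version (the rates, burst sizes, and traffic-class names are assumptions), not a real QoS implementation, which would live in network hardware or the operating system:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: one bucket per traffic class."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Refill tokens for elapsed time; admit the packet if enough remain."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # over budget: drop or queue the packet

# Policy sketch: protect interactive traffic with a generous bucket,
# restrict bulk downloads with a tight one.
policies = {
    "voip": TokenBucket(rate_bytes_per_sec=128_000, burst_bytes=16_000),
    "bulk": TokenBucket(rate_bytes_per_sec=32_000, burst_bytes=4_000),
}
print(policies["bulk"].allow(2_000))   # within burst: admitted
print(policies["bulk"].allow(8_000))   # exceeds remaining tokens: restricted
```

The key idea matches the slide: traffic is first classified (the dictionary key), then a per-class policy decides whether its bandwidth is protected or restricted.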
73. Incident Management
The incident management process aims to ensure that
incidents are detected and service requests are then recorded.
Recording ensures that there are no lost incidents or service
requests, allows the records to be tracked, and provides
information to aid problem management and planning
activities. The process includes the use of technology to
provide self-service facilities to customers, providing them
with flexible and convenient interfaces to the support
function while also reducing the workload and personnel
requirements of the service desk.
Service requests, such as a request for change (RFC) or a
batch job request, are also recorded and then handled
according to the relevant processes for that type of service
request
74. Incident Management
Incidents undergo classification to ensure that they are
correctly prioritized and routed to the correct support
resources. Incident management includes initial support
processes that allow new incidents to be checked against
known errors and problems so that any previously identified
workarounds can be quickly located.
Incident management then provides a structure by which
incidents can be investigated, diagnosed, resolved, and then
closed. The process ensures that the incidents are owned,
tracked, and monitored throughout their life cycle.
There may be occasions when major incidents occur that
require a response above and beyond that provided by the
normal incident process.
75. Incident Management
Incident management includes a process for handling these
major incidents, including management and functional
escalations, effective communications, and formal rollback
plans.
The objectives of incident management are:
•To restore normal service as quickly as possible.
•To minimize the impact of incidents on the business.
•To ensure that incidents and service requests are processed
consistently and that none are lost.
•To direct support resources where most required.
•To provide information that allows support processes to be
optimized, the number of incidents to be reduced, and
management planning to be carried out.
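The classification step described above is often implemented as an impact/urgency matrix from which priority is derived. The matrix values below are illustrative; ITIL does not prescribe specific numbers:

```python
# ITIL-style priority derivation: priority is a function of impact and urgency.
# The levels and resulting priorities here are illustrative assumptions.
PRIORITY_MATRIX = {
    ("high", "high"): 1,   # major incident: invoke escalation procedures
    ("high", "low"):  2,
    ("low",  "high"): 3,
    ("low",  "low"):  4,
}

def prioritize(impact: str, urgency: str) -> int:
    """Map an incident's impact and urgency to a priority level."""
    return PRIORITY_MATRIX[(impact, urgency)]

# A service outage affecting the whole business, needed immediately:
print(prioritize("high", "high"))   # priority 1
```

Deriving priority from two separately recorded dimensions, rather than asking the caller for a priority directly, is what lets incidents be "correctly prioritized and routed" consistently.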
76. Incident Management
Incident management handles all detected incidents
and all service requests that can be raised through
the service desk.
ITIL defines an incident as: Any event that is not
part of the standard operation of a service that
causes, or may cause, an interruption to, or a
reduction in, the quality of service.
Typical incidents could include:
•A service being unavailable
•Software corruption
•A hardware failure
•The detection of a virus
77. Incident Management
The range of different service requests received by
the IT organization varies between different
organizations. Common service requests can
include:
•Requests for change (RFCs)
•Requests for information (RFIs)
•Procurement requests
•Batch job requests for a specific purpose
•Service extension requests
•Password resets
78. Incident Management
Ownership, Tracking, and Monitoring
The diagram below shows the incident life cycle from
the initial occurrence through to closure of the
incident following confirmation that the issue has
been resolved.
79. Help Desk
A help desk is an information and assistance
resource that troubleshoots problems with
computers and similar products. Corporations often
provide help desk support to their customers via a
toll-free number, website and/or e-mail. There are
also in-house help desks geared toward providing
the same kind of help for employees only.
In the Information Technology Infrastructure
Library, within companies adhering to ISO/IEC
20000 or seeking to implement IT Service
Management best practice,
80. Help Desk
A Help Desk may offer a wider range of user
centric services and be part of a larger Service
Desk.
A typical help desk has several functions. It provides the
users a central point to receive help on various computer
issues. The help desk typically manages its requests via
help desk software, such as an incident tracking system,
that allows them to track user requests with a unique ticket
number. This can also be called a "Local Bug Tracker" or
LBT. The help desk software can often be an extremely
beneficial tool when used to find, analyze, and eliminate
common problems in an organization's computing
environment.
81. Help Desk
The user notifies the help desk of his or her
issue, and the help desk issues a ticket that
has details of the problem. If the first level is
able to solve the issue, the ticket is closed and
updated with documentation of the solution to
allow other help desk technicians to reference.
If the issue needs to be escalated, it will be
dispatched to a second level.
There are many software applications available
to support the help desk function. Some target
enterprise-level help desks (rather large), while
others target departmental needs.
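The ticket lifecycle described above (open with a unique number, resolve and document, or escalate to second level) can be sketched as follows; the class and field names are hypothetical, not from any specific help desk product:

```python
import itertools

class HelpDesk:
    """Minimal ticket tracker: open a uniquely numbered ticket,
    then resolve it or escalate it to the next support level."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.tickets = {}

    def open_ticket(self, user, issue):
        ticket_id = next(self._ids)   # the unique ticket number
        self.tickets[ticket_id] = {"user": user, "issue": issue,
                                   "level": 1, "status": "open", "solution": None}
        return ticket_id

    def close(self, ticket_id, solution):
        # Document the fix so other technicians can reference it later.
        self.tickets[ticket_id].update(status="closed", solution=solution)

    def escalate(self, ticket_id):
        # First level could not resolve: dispatch to second level.
        self.tickets[ticket_id]["level"] += 1

desk = HelpDesk()
t1 = desk.open_ticket("alice", "cannot print")
desk.close(t1, "restarted spooler service")
t2 = desk.open_ticket("bob", "database timeout")
desk.escalate(t2)
print(desk.tickets[t1]["status"], desk.tickets[t2]["level"])  # closed 2
```

Recording the solution on closure is what turns the ticket store into the knowledge base the slide mentions for finding and eliminating common problems.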
82. Desk Top Management
Desktop management is a comprehensive approach
to managing all the computers within an
organization. Despite its name, desktop
management includes overseeing laptops and other
computing devices as well as desktop computers.
Desktop management is a component of systems
management, which is the administration of all
components of an organization's information
systems. Other components of systems
management include network management and
database management.
83. Desk Top Management
Traditional desktop management tasks include installing
and maintaining hardware and software, spam filtering, and
administering user permissions. In recent years, however,
security-related tasks have become an increasingly large
part of desktop management. As a result, an increasingly
large proportion of administrative resources have been
devoted to security-related tasks, such as patch
management, fighting viruses and Spyware, and controlling
greynet applications (programs installed without corporate
approval, such as instant messaging, file sharing programs,
and RSS readers).
84. Remote Desk Top Management
One of the many challenges facing Microsoft
administrators is how to manage remote
systems in a secure manner. In the UNIX
world the answer is quite simple: the
SSH protocol is sufficient. Thanks to SSH,
we can manage remote systems not only in
text mode, but can also run remote X-
Window applications by using the protocol
tunneling technique. And all of that with
strong cryptography, which protects
transmitted data from unauthorized access.
85. Remote Desk Top Management
Unfortunately, providing secure remote access to
MS Windows systems is not as easy. Why?
First of all, only NT Terminal Server, 2000
Server and XP are equipped with remote
management services (Terminal Services).
Secondly, the solutions that offer remote MS
Windows management either don't
encrypt transmitted data (like VNC) or their
implementation often comes with
additional, significant costs.
86. Remote Desk Top Management
What features should a remote management
solution have? First of all, the solution must be
functional. Although in the case of Unix
systems, access to an emulated text terminal
is often sufficient, using such methods to
manage MS Windows is far from ideal. Because
MS Windows is based on a graphical
environment, remote management
should also be realized in a graphical mode.
Besides being functional, remote management
must also be secure. The solution must not
only provide user authentication, but must also
assure confidentiality and integrity of the
transmitted data.
87. Remote Desk Top Management
In the remote management solution that will be presented in
this discussion, all the above requirements will be met by
using the following open-source software:
•VNC - VNC (Virtual Network Computing) provides
graphical management of remote systems. In our case, the
VNC software will be the "core" of the whole solution. It
will provide a graphical console to the remote MS Windows
system.
•Stunnel - The main purpose of the Stunnel utility is to
create SSL tunnels that can be used to transmit other, often
non-encrypted protocols in a secure manner. In the described
solution, this tool will be used to secure the VNC protocol.
88. Remote Desk Top Management
•Thanks to Stunnel, it will be possible to assure not only
confidentiality and integrity of the transmitted data, but also
to authenticate VNC clients and servers by certificates.
•OpenSSL - OpenSSL is a library of cryptographic functions
that can be used to add data-encryption functions to
applications. Using OpenSSL we can also generate, sign and
revoke certificates for use in solutions based on a
public key infrastructure (PKI). In the method presented
below, this tool will be used to generate and sign the
certificates needed to authenticate both VNC clients and servers.
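Putting the pieces together, a server-side stunnel configuration wrapping the VNC port in SSL with certificate-based client authentication might look like the fragment below; the file names and port numbers are illustrative assumptions:

```ini
; stunnel, server side (illustrative paths and ports)
cert = server.pem          ; server certificate, signed by our own CA
key = server.key
CAfile = ca.pem            ; CA certificate used to verify VNC clients
verify = 2                 ; require a valid client certificate

[vnc]
accept = 5901              ; SSL port exposed to the network
connect = 127.0.0.1:5900   ; local, unencrypted VNC server
```

The VNC server itself listens only on the loopback interface; only the SSL-wrapped port is reachable remotely, so all management traffic crosses the network encrypted and mutually authenticated.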
89. Remote Desk Top Management
The following picture shows the way the software
mentioned above will be used to provide secure
management of remote desktops:
90. Remote Desk Top Management
Major features of desktop management:
•Speed improvement
•Real full screen
•Improvement to file transfer scheduling
•More audit info gathered from remote systems
•Email connection attempts
•Error logging improvement
•Ping timeouts added for connection attempts
•Connect remote system improvement
91. Network inventory
System admin/help desk engineers can access a
user's desktop and solve software issues,
but it is hard to manage hardware and software
details such as memory.
Network inventory is a powerful tool for
software and hardware inventory and audit.
Remote Desktop Professional is a remote desktop
management tool with a built-in
performance monitor.
You can use it to gather information on remote machines
on your network.
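As a minimal sketch of the kind of data an inventory tool collects, the Python snippet below reads basic details of the local machine using only the standard library; a real network inventory tool would gather the same fields remotely (e.g. via WMI or SNMP):

```python
import platform
import socket

def collect_inventory():
    """Gather basic hardware/software details for one machine.

    Reads the local machine as a sketch; an inventory tool would
    collect this over the network and store one record per host.
    """
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.release(),
        "architecture": platform.machine(),
        "processor": platform.processor(),
    }

record = collect_inventory()
for field, value in record.items():
    print(f"{field:14}: {value}")
```

Each such record would become one row in the inventory described on slide 68, keyed by the asset number assigned at procurement.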
92. “Like” us on Facebook:
http://www.facebook.com/welearnindia
“Follow” us on Twitter:
http://twitter.com/WeLearnIndia
Watch informative videos on Youtube:
http://www.youtube.com/WelingkarDLP