Clustering: Ontology and the Dublin Core

 Running head: CLUSTERING
Clustering: Ontology and the Dublin Core
Introduction
Have you ever wondered how some organizations are able to perform parallel automated
operations for different user groups or client locations while at the same time ensuring
fast, accurate, and highly available information and services? Think of a company such as
Google: how does it manage and synchronize all user needs and serve all of its clients at the
same time? These questions bring us to clustering, which has been available for at least the last
two decades. Let us take a short look at Google to capture the bigger picture of computer
clustering:
Google has spent the past 15 years taking systems software research from
university laboratories and building its own proprietary, production-quality
cluster systems. But just what is this technology that Google has been building? Simply, it is a
distributed computing network which manages:
Web-scale datasets on a 100,000-node server cluster. It comprises a petabyte,
distributed, fault-tolerant file system, distributed RPC code, possibly network
shared memory and process migration, and a datacenter management system
which lets a handful of operating system engineers effectively run 100,000
servers.
Skrenta (2004)
The secret of Google’s power, according to Skrenta (2004), is simply system clustering.
Google is a company which has built a single, incredibly large, custom computer cluster
and runs its own cluster software on it. The company makes its giant computer cluster
bigger and faster each month, while reducing the cost of its CPU cycles.
The questions we may be asking ourselves right now are: what then is this clustering,
and how does it operate? What benefits does it have over a single-server system? Thankfully,
clustering is not as complicated as many people think, and in the next section we are going to look
into these questions and also consider how ontology and the Dublin Core are linked to clustering.
About Clustering
In simple layman’s language, clustering is linking two or more computers
together so that they can act like one computer. More precisely, in computing, clustering refers to the
use of multiple computers (commonly UNIX workstations or PCs), multiple storage devices,
and redundant interconnections to create what appears to be one highly available server
system. Cluster computing is used for high availability, fault tolerance, parallel processing, and
load balancing.
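To make the last of these uses concrete, the following is a minimal sketch, in Python, of round-robin load balancing: requests are handed to each cluster node in turn so no single machine carries all the work. The node names and the `RoundRobinBalancer` class are hypothetical illustrations, not part of any real cluster product.

```python
# Illustrative sketch only: a minimal round-robin load balancer that
# distributes incoming requests evenly across hypothetical cluster nodes.
from itertools import cycle

class RoundRobinBalancer:
    """Cycles through the cluster's nodes, sending each request to the next one."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)  # endless iterator over the node list

    def route(self, request):
        node = next(self._nodes)    # pick the next node in rotation
        return f"{request} -> {node}"

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
print(balancer.route("request-0"))  # request-0 -> node-1
print(balancer.route("request-1"))  # request-1 -> node-2
```

Real clusters use more sophisticated policies (least-connections, weighted routing), but the principle of spreading load across interchangeable nodes is the same.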
What does Clustering Include?
A server cluster contains a network of servers (such as the one used by Google), referred
to as nodes. Nodes communicate with each other to provide sets of highly available
services to users. They are designed to serve applications with long-running, in-memory, or
frequently updated state. They include print servers, file servers, messaging servers, and database
servers: most of us have certainly come across these servers in the course of day-to-day activities—
even though we did not recognize the concept of clustering at the time—true?
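The "highly available" part of this depends on nodes telling each other they are still alive. Below is a hedged sketch of one common mechanism, a heartbeat monitor: each node periodically reports in, and a node that misses its deadline is treated as failed so its services can move elsewhere. The class, node names, and timeout value are all illustrative assumptions, not a specific product's API.

```python
# Illustrative sketch: nodes report heartbeats; the monitor marks any node
# that has not reported within `timeout` seconds as failed.
import time

class ClusterMonitor:
    def __init__(self, timeout):
        self.timeout = timeout   # seconds a node may stay silent
        self.last_seen = {}      # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        # Record that `node` checked in (a fixed `now` makes examples testable).
        self.last_seen[node] = now if now is not None else time.monotonic()

    def healthy_nodes(self, now=None):
        now = now if now is not None else time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t <= self.timeout]

monitor = ClusterMonitor(timeout=5.0)
monitor.heartbeat("file-server", now=100.0)
monitor.heartbeat("print-server", now=103.0)
print(monitor.healthy_nodes(now=106.0))  # ['print-server'] — file-server timed out
```

When the monitor drops a node from the healthy list, cluster software would typically restart that node's services on a surviving node, which is what makes the cluster appear to users as one continuously available server.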
How does Clustering Work?
Since clustering is a broadly used term, the hardware configuration of a system cluster
usually depends on the networking technology chosen and the purpose of the system. Three basic
kinds of clustering hardware are involved: mirrored-disk, shared-nothing, and shared-disk
configurations. Using simplified terminology, let us look into each of these clusters, one at a time,
beginning with shared-disk clusters.
Shared disk clusters. Mitchell (2010) points out that this method of clustering uses
centralized I/O devices available to all network nodes (computers) within the
system cluster. They are referred to as shared-disk clusters because the I/O involved is
normally disk storage for ordinary files and databases. Good examples of technologies using
shared-disk clusters are OPS (Oracle Parallel Server) and IBM’s HACMP. Shared-disk
clusters depend on a common I/O bus to access the disks, but they do not need shared memory.
Since all nodes may concurrently cache data from, or write to, the centralized disks,
synchronization mechanisms have to be used to safeguard the consistency of the system.
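The need for synchronization can be shown with a small analogy in Python: several "nodes" (here, threads) update one shared store, and a lock stands in for the cluster's distributed lock manager. Without the lock, concurrent updates could interleave and lose writes; with it, the shared data stays consistent. This is a teaching analogy under stated assumptions, not how OPS or HACMP are actually implemented.

```python
# Illustrative analogy: threads play the role of cluster nodes writing to one
# shared "disk"; the lock plays the role of the cluster's synchronization
# mechanism, ensuring only one node modifies the shared data at a time.
import threading

shared_disk = {"counter": 0}
disk_lock = threading.Lock()  # stands in for a distributed lock manager

def node_writes(n_updates):
    for _ in range(n_updates):
        with disk_lock:  # acquire exclusive access before touching shared data
            shared_disk["counter"] += 1

threads = [threading.Thread(target=node_writes, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_disk["counter"])  # 4000 — every update preserved, none lost
```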
Shared-nothing clusters. Ordinarily, we may tend to think that this
group of clusters operates like single-server systems because, taken literally, they share
nothing! However, as explained by Mitchell (2002), the term is used to mean that this form does
not entail parallel disk accesses from multiple computers. This implies that they do not call for a
distributed lock manager; examples include MSCS (Microsoft Cluster Server). MSCS clusters
use a common SCSI link connecting the nodes, which logically leads some people to consider them a
shared-disk solution. However, on

