Cluster computing at home. Desktop cluster. Checking network connections and name resolution

I built my first "cluster" of single-board computers almost as soon as the Orange Pi PC began to gain popularity. Calling it a "cluster" was a big stretch: from a formal point of view it was just a local network of four boards that could "see" each other and reach the Internet.

The device participated in the SETI@home project and even managed to crunch something. But, unfortunately, no one came to pick me up from this planet.
Still, I learned a lot from all that time spent fiddling with wires, connectors and microSD cards. For example, I found out that you should not trust the declared rating of a power supply, that it is wise to spread the consumption load, and that wire cross-section matters.

And yes, I had to jury-rig the power management system, because the simultaneous start of five single-board computers can require an inrush current on the order of 8-10 A (five boards at roughly 2 A each). That is a lot, especially for PSUs made in basement workshops, the kind we so love to order along with all sorts of ... interesting gadgets.

Perhaps I'll start with it. The task boiled down to something relatively simple: after a given delay, sequentially turn on four channels supplying 5 volts. The easiest way to implement the plan is an Arduino (which every self-respecting geek has in abundance) plus one of those miracle boards from AliExpress with four relays.

And you know, it even worked.

However, the fridge-style clicking at startup was off-putting. Firstly, every click sent interference through the power rail and capacitors had to be added; secondly, the whole contraption was rather bulky.

So one day I just replaced the relay box with transistor switches based on IRL520.

This solved the interference problem, but since the MOSFET switches the ground side, I had to abandon the brass standoffs in the rack so as not to accidentally tie the boards' grounds together.

The solution replicates nicely: two clusters are already running stably without any surprises. Just as planned.

But back to replicability. Why buy PSUs for a hefty price when affordable ATX supplies are literally lying under your feet?
Moreover, they provide all the needed voltages (5 V, 12 V, 3.3 V), rudimentary self-diagnostics, and can be controlled programmatically.

Well, I won't go into detail here - see the article about controlling an ATX PSU with an Arduino.

Well, all the pills have been swallowed and the stamps licked? Time to put it all together.

There will be one head node that connects to the outside world via WiFi and gives the "Internet" to the cluster. It will be powered by ATX standby voltage.

The Internet sharing itself is handled by TBNG.
So, if desired, the cluster nodes can be hidden behind Tor.

There is also a custom board connected to the head node via I2C. It can power each of the 10 worker nodes on and off, and it drives three 12 V fans that cool the whole system.

The scenario is as follows: when the ATX PSU is plugged into the 220 V mains, the head node starts. When the system is ready for operation, it sequentially powers up all 10 nodes and the fans.
Once the power-on sequence is over, the head node polls each worker node, asking how it feels and what its temperature is. If one of the stacks is running hot, the airflow is increased.
On the shutdown command, each node is gracefully halted and then de-energized.
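A minimal sketch of that polling loop, assuming the worker nodes are reachable over SSH as node1..node10 and that the fan tool accepts the --fan/--set flags shown in the startup script further below (the hostnames and the 60 °C threshold are purely illustrative):

#!/usr/bin/env sh
# crude health/temperature sweep over the worker nodes
for i in $(seq 1 10); do
    # the kernel reports millidegrees Celsius on most ARM SBCs
    t=$(ssh "node$i" cat /sys/class/thermal/thermal_zone0/temp)
    echo "node$i: $((t / 1000)) C"
    if [ "$t" -gt 60000 ]; then
        # node is running hot - spin the fans up
        /home/zno/i2creobus/i2creobus_tool.py --fan 0 --set 100
    fi
done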

I drew the circuit board myself, so it looks scary. Fortunately, a skilled person took over the routing and manufacturing, for which many thanks to him.

Here it is during assembly.

Here is one of the first sketches of the cluster layout. Drawn on squared paper and immortalized with Office Lens on my phone.

The whole structure is placed on a sheet of PCB purchased for the occasion.

This is what the arrangement of the nodes looks like inside: two stacks of five boards each.

Here you can see the control Arduino. It is connected to the head Orange Pi PC over I2C through a level shifter.

And here is the final (current) version.

So, all that remains is to write a few utilities in Python to conduct all this music: power things on, power them off, adjust the fan speed.

I won't bore you with technical details - it looks something like this:

#!/usr/bin/env sh

echo "Starting ATX board ..."
/home/zno/i2creobus/i2catx_tool.py --start
echo "Setting initial fan values ..."
/home/zno/i2creobus/i2creobus_tool.py --fan 0 --set 60
/home/zno/i2creobus/i2creobus_tool.py --fan 1 --set 60
/home/zno/i2creobus/i2creobus_tool.py --fan 2 --set 60
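The script above only brings up the ATX board and the fans. Powering up the ten worker nodes would be one more loop in the same spirit; note that the --node/--on flags here are an assumption about the tool's interface, not something shown in the original utilities:

echo "Powering on worker nodes ..."
for i in $(seq 0 9); do
    # hypothetical flags: switch power channel $i on
    /home/zno/i2creobus/i2creobus_tool.py --node "$i" --on
    sleep 2   # stagger the start-up to keep the inrush current down
done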

Since we already have as many as 10 nodes, we adopt Ansible, which will help, for example, to shut down all the nodes properly, or to run a temperature monitor on each of them.

---
- hosts: workers
  roles:
    - webmon_stop
    - webmon_remove
    - webmon_install
    - webmon_start
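Running that playbook, or doing a one-off graceful shutdown of every worker before cutting their power, is then a single command on the head node (the inventory and playbook file names here are assumptions for illustration):

# run the webmon playbook against the workers group
ansible-playbook -i hosts webmon.yml

# or an ad-hoc graceful shutdown of all workers
ansible workers -i hosts -b -a "shutdown -h now"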

People often tell me, in a dismissive tone, that this is just a local network of single-board computers (as I mentioned at the very beginning). In general, I couldn't care less about other people's opinions, but let's add some glamour and set up a Docker Swarm cluster.
The task is quite simple and takes less than 10 minutes. Then we launch a Portainer instance on the head node, and voila!
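For reference, a rough command sequence, assuming the head node's address is 192.168.1.1 (the address, token placeholder and published port are illustrative):

# on the head node: initialise the swarm and print the join token
docker swarm init --advertise-addr 192.168.1.1
docker swarm join-token worker

# on every worker node: join using the token printed above
docker swarm join --token <token> 192.168.1.1:2377

# back on the head node: run Portainer as a service pinned to the manager
docker service create --name portainer --publish 9000:9000 \
    --constraint 'node.role == manager' \
    --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
    portainer/portainer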

Now you can really scale tasks. At the moment the cluster runs a Verium Reserve cryptocurrency miner, and quite successfully. Hopefully it will at least pay for the electricity it consumes ;) Or I can reduce the number of nodes involved and mine something else, like TurtleCoin.

If you want a useful payload, you can throw Hadoop at the cluster or set up load balancing for web servers. There are plenty of ready-made images on the Internet and enough training material. And if a Docker image is missing, you can always build your own.
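Balancing web servers across the swarm is then essentially a one-liner; nginx here is just an example image:

# ten nginx replicas spread across the cluster, reachable on port 80 of any node
docker service create --name web --replicas 10 --publish 80:80 nginx

# scale down later if the nodes are needed for something else
docker service scale web=5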

What did this teach me? That the technology "stack" is very wide. Judge for yourself: Docker, Ansible, Python, Arduino tinkering (forgive me, it shouldn't be mentioned this close to nightfall), and of course the shell. As well as KiCad and working with a contractor :).

What can be done better? A lot. On the software side, it would be nice to rewrite the control utilities in Go. On the hardware side, make it more steampunk; the lead picture at the top sets the bar nicely. So there is plenty to work on.

The roles were played by:

  • The head node is Orange Pi PC with usb wifi.
  • Work nodes - Orange Pi PC2 x 10.
  • Network - a 100 Mbps Ethernet switch.
  • Brain - Arduino clone based on Atmega8 + level converter.
  • The heart is an ATX power controller with a power supply.
  • Software (soul) - Docker, Ansible, Python 3, a little shell and a little laziness.
  • The time spent is priceless.

In the course of the experiments, a pair of Orange Pi PC2 boards died from mixed-up power wiring (they burn very nicely), and another PC2 lost its Ethernet (a separate story whose physics I still do not understand).

That seems to be the whole story, at least at a high level. If you find it interesting, ask questions in the comments, and upvote the questions there too (every comment has a button for that). The most interesting questions will be covered in follow-up notes.
Thanks for reading to the end.

Today, the business processes of many companies are completely tied to information
technology. As organizations grow more dependent on their computing networks,
the availability of services at any time and under any load becomes critical.
A single computer can provide only a basic level of reliability and scalability;
the maximum level can be achieved by combining two or more computers into
a single system - a cluster.

What is a cluster for?

Clusters are used in organizations that need round-the-clock, uninterrupted
availability of services, where any downtime is undesirable or unacceptable.
They are also used where a load surge is possible that the main server cannot
cope with on its own; in that case additional hosts, which usually perform other
tasks, help pick up the load. For a mail server processing tens or hundreds of
thousands of messages a day, or a web server serving an online shop, the use of
clusters is highly desirable. To the user such a system remains completely
transparent: the whole group of computers looks like a single server. Using
several, even cheaper, computers gives very significant advantages over a single
fast server: even distribution of incoming requests, increased fault tolerance
(when one element fails, its load is picked up by the other systems),
scalability, convenient maintenance and replacement of cluster nodes, and much
more. Failure of one node is detected automatically and the load is
redistributed; to the client all of this goes unnoticed.

Win2k3 features

Generally speaking, some clusters are designed to improve data availability,
others to ensure maximum performance. In the context of this article we are
interested in MPP (Massive Parallel Processing) clusters, in which applications
of the same type run on several computers, providing scalability of services.
There are several technologies that allow the load to be distributed between
several servers: traffic redirection, address translation, DNS Round Robin, and
special programs that work at the application level, such as web accelerators.
In Win2k3, unlike Win2k, clustering support is built in from the start, and two
types of clusters are supported, differing in the applications they serve and
the kind of data they handle:

1. NLB (Network Load Balancing) clusters - provide scalability and high
availability of services and applications based on the TCP and UDP protocols,
combining up to 32 servers with the same data set into one cluster on which the
same applications run. Each request is executed as a separate transaction. They
are used to work with rarely changing data sets, such as WWW, ISA, terminal
services and other similar services.

2. Server clusters - can combine up to eight nodes; their main goal is to ensure
the availability of applications in the event of a failure. They consist of
active and passive nodes. The passive node is idle most of the time, acting as a
spare for the active node. For individual applications it is possible to
configure several active servers and distribute the load between them. Both
nodes are connected to a single data store. A server cluster is used to work
with large volumes of frequently changing data (mail, file and SQL servers).
Such a cluster cannot consist of nodes running different editions of Win2k3:
only Enterprise and Datacenter support server clusters (the Web and Standard
editions do not).

In Microsoft Application Center 2000 (and only there) there was one more type of
cluster - CLB (Component Load Balancing), which provided the ability to
distribute COM+ applications across multiple servers.

NLB clusters

When load balancing is used, a virtual network adapter is created on each of the
hosts, with its own IP and MAC address independent of the real ones. This
virtual interface presents the cluster as a single node; clients address it by
this virtual address. All requests are received by every cluster node, but each
is processed by only one of them. The Network Load Balancing Service runs on
all nodes and, using a special algorithm that does not require data exchange
between the nodes, decides whether a particular node needs to process a given
request or not. The nodes exchange heartbeat messages indicating their
availability. If a host stops sending heartbeats, or a new host appears, the
remaining nodes start a convergence process and redistribute the load.
Balancing can be performed in one of two modes:

1) unicast - the MAC address of the cluster's virtual adapter is used instead of
the physical MAC. In this case the cluster nodes cannot exchange data with each
other by MAC address, only via IP (or through a second adapter not associated
with the cluster);

2) multicast - the cluster is assigned a multicast MAC address while each node
keeps its own physical MAC, so the nodes can still communicate with each other
directly over the same adapter.

Only one of these modes should be used within the same cluster.

Several NLB clusters can be configured on a single network adapter by specifying
specific rules for ports. Such clusters are called virtual. Using them, you can
assign a specific set of computers in the primary cluster to each application,
node or IP address, or block traffic for a particular application without
affecting traffic for other programs running on that node. Conversely, the NLB
component can be bound to several network adapters, which lets you configure a
number of independent clusters on each node. You should also be aware that
server clusters and NLB cannot be set up on the same node, because they work
with network devices differently.

The administrator can create a kind of hybrid configuration that has the
advantages of both methods, for example, by creating an NLB cluster and setting
up data replication between the nodes. But replication is not performed
constantly, only from time to time, so the information on different nodes will
differ for a while.

That is enough theory, although one could talk about building clusters for a
long time, listing the possibilities and ways of scaling out, giving various
recommendations and options for specific implementations. We will leave all
these subtleties and nuances for self-study and move on to the practical part.

Setting up an NLB cluster

Organizing an NLB cluster requires no additional software; everything is done
with the tools available in Win2k3. To create, maintain and monitor NLB
clusters, use the Network Load Balancing Manager component, found under
"Administrative Tools" in the "Control Panel" (or run the NLBMgr command).
Since the "Network Load Balancing" component is installed as a standard Windows
network driver, NLB can also be set up via the "Network Connections" component,
where the corresponding item is available. But it is better to use only the
first option: using the NLB manager and "Network Connections" at the same time
can lead to unpredictable results.

The NLB manager allows you to configure and manage several clusters and nodes
from a single place.

It is also possible to install an NLB cluster on a computer with a single
network adapter bound to Network Load Balancing, but in this case, in unicast
mode, the NLB manager on that computer cannot be used to control other nodes,
and the nodes themselves cannot exchange information with each other.

Now we launch the NLB manager. There are no clusters yet, so the window contains
no information. Select "New" in the "Cluster" menu and start filling in the
fields in the "Cluster parameters" window. In the cluster IP configuration
section, enter the virtual IP address of the cluster, the subnet mask and the
full name. The virtual MAC address is set automatically. Below, select the
cluster operation mode: unicast or multicast. Pay attention to the "Allow
remote control" checkbox: in all its documents Microsoft strongly recommends
not using it, to avoid security problems. Instead, you should use the manager
or other remote-control tools, for example Windows Management Instrumentation
(WMI). If you do decide to use it, take all appropriate measures to protect the
network, at a minimum additionally covering UDP ports 1717 and 2504 with a
firewall.

After filling in all the fields, click "Next". In the "Cluster IP Addresses"
window, add, if necessary, additional virtual IP addresses that will be used by
this cluster. In the next window, "Port rules", you can set up load balancing
for a single port or a group of ports, for all or selected IP addresses, over
UDP or TCP, and also block access to specific cluster ports (which does not
replace a firewall). By default the cluster handles requests on all ports
(0-65535); it is better to narrow this list down to only those that are really
needed, although if you do not want to bother you can leave everything as is.
Incidentally, in Win2k all traffic directed to the cluster was by default
processed only by the node with the highest priority; the other nodes stepped
in only when the main one failed.

For example, for IIS you only need to enable ports 80 (http) and 443 (https).
You can also arrange, say, for secure connections to be processed only by
specific servers on which the certificate is installed. To add a new rule,
click "Add"; in the dialog box that appears, enter the host IP address, or
leave the "All" checkbox if the rule applies to everyone. In the "From" and
"To" port range fields enter the same value, 80. The key field is "Filtering
mode": it specifies who will process the request. Three options are available:
"Multiple hosts", "Single host" and "Disable this port range". Selecting
"Single host" means that traffic directed to the selected IP (of a computer or
the cluster) with the specified port number will be processed by the active
node with the lowest priority value (more on that below). Selecting
"Disable ..." means that such traffic will be dropped by all cluster members.

In the "Multiple nodes" filtering mode, you can additionally specify the option
identifying customer affinity to direct traffic from a given customer to
the same cluster node. There are three options: None, One, or Class
C ". Choosing the first means that any request will be answered by an arbitrary
node. But you should not use it if UDP is selected in the rule or
"Both". When choosing the remaining points, the similarity of customers will be determined by
specific IP or class C network range.

So, for our rule for port 80 let's opt for "Multiple hosts - Class C". The rule
for 443 is filled in the same way, but we use "Single host" so that the client
is always answered by the primary node with the lowest priority. If the manager
finds an incompatible rule, it prints a warning message and an appropriate
entry is additionally written to the Windows event log.

Next, we connect to a node of the future cluster by entering its name or real
IP, and define the interface that will be connected to the cluster network. In
the host parameters window, select the priority from the list, specify the
network settings and set the initial host state (started, stopped, suspended).
The priority is also the unique identifier of the node: the lower the number,
the higher the priority. The node with priority 1 is the master server, which
receives the packets first and acts as the routing manager.

The "Save state after computer restart" checkbox lets the node come back online
automatically after a failure or reboot. After clicking "Finish", a record of
the new cluster, containing one node so far, appears in the manager window.
Adding the next node is just as easy. Choose "Add node" or "Connect to
existing" from the menu, depending on which computer you are connecting from
(whether it is already a member of the cluster or not). Then specify the name
or address of the computer in the window; if you have sufficient rights, the
new node will be joined to the cluster. At first the icon next to its name will
look different, but when the convergence process completes it will be the same
as for the first computer.

Since the manager displays the properties of the nodes as of the moment it
connected, to see the current state you should select the cluster and choose
"Refresh" in the context menu. The manager will connect to the cluster and show
the updated data.

After installing the NLB cluster, do not forget to change the DNS record so
that name resolution now points to the cluster IP.
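A quick way to check both the network connection and name resolution afterwards is to query the record and ping the cluster from any client machine; the name and address here are placeholders for this example:

nslookup cluster.mycompany.ru    # should now return the cluster's virtual IP
ping cluster.mycompany.ru        # replies should come from the virtual IP, whichever node answers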

Changing server load

In this configuration all servers are loaded evenly (except when the "Single
host" option is used). In some cases it is necessary to redistribute the load,
assigning most of the work to one of the nodes (for example, the most
powerful). Once created, a cluster's rules can be modified by selecting the
"Cluster properties" item in the context menu that appears when you click on
its name. All the settings we talked about above are available there. The
"Host properties" menu item provides a few more options. Under "Host
parameters" you can change the priority value for a specific node. Under "Port
rules" you cannot add or remove a rule - that is possible only at the cluster
level - but by choosing to edit a specific rule you can adjust some settings.
Thus, with the "Multiple hosts" filtering mode set, the "Load weight" item
becomes available, allowing the load to be shifted toward a specific node. By
default it is set to "Equal", but in "Load weight" you can specify a different
share of the total cluster load for a particular node, as a percentage. If the
"Single host" filtering mode is active, a new parameter appears, "Handling
priority". Using it, you can arrange for traffic to one port to be handled
primarily by one cluster node and traffic to another port by a different one.

Event logging

As mentioned, the Network Load Balancing component records all cluster actions
and changes in the Windows event log. To see them, open "Event Viewer -
System"; NLB messages appear under the WLBS source (from Windows Load Balancing
Service, as it was called in NT). In addition, the manager window displays the
latest messages containing information about errors and any configuration
changes. By default this information is not persisted. To write it to a file,
select "Options -> Log Settings", check the "Enable logging" checkbox and
specify a file name. The new file will be created in a subdirectory of your
account under Documents and Settings.

Configuring IIS with Replication

A cluster is a cluster, but without a service it makes no sense, so let's add
IIS (Internet Information Services). The IIS server is part of Win2k3, but to
minimize the possibility of attacks on the server it is not installed by
default.

There are two ways to install IIS: through the "Control Panel" or through the
server's role management wizard. Let's consider the first. Go to "Control
Panel - Add or Remove Programs", select "Add/Remove Windows Components". Now go
to the "Application Server" item and check everything you need under "IIS
Services". By default the server's working directory is \Inetpub\wwwroot.
Once installed, IIS can serve static documents.

First of all, decide what components and resources you will need: one head node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit, and a rack. Determine the wiring and cooling capacity and the amount of floor space required. Also decide what IP addresses you want to use for the nodes, what software you will install, and what technologies will be needed to create the parallel computing power (more on this below).

  • Although the hardware is expensive, all of the programs in this article are free, and most of them are open source.
  • If you want to know how fast your supercomputer can theoretically be, use this tool:

Assemble the nodes. You will need to build the hosts yourself or purchase pre-assembled servers.

  • Choose server chassis that make the most of space and energy efficiency and provide effective cooling.
  • Or you can repurpose a dozen or so used, somewhat outdated servers - even if they outweigh their components combined, you will save a decent amount. All processors, network adapters and motherboards must be the same for the computers to work well together. Of course, do not forget about RAM and hard drives for each node, as well as at least one optical drive for the head node.
  • Install the servers in the rack. Start from the bottom so that the rack is not top-heavy. You will need a friend's help - assembled servers can be very heavy, and it is quite tricky to slide them onto the rails that hold them in the rack.

    Install the Ethernet switch next to the rack. It is worth configuring the switch right away: set the jumbo frame size to 9000 bytes, set the static IP address you chose in step 1 and disable unnecessary protocols such as SMTP.

    Install a Power Distribution Unit (PDU). Depending on the maximum load your nodes draw, you may need 220 volts for a high-performance computer.

  • When everything is installed, proceed to configuration. Linux is in fact the main system for high-performance (HPC) clusters: not only is it ideal as an environment for scientific computing, but you also don't have to pay to install it on hundreds or even thousands of nodes. Imagine how much it would cost to install Windows on every node!

    • Start by installing the latest BIOS version for the motherboard and the latest firmware from the manufacturer, which must be the same on all servers.
    • Install your preferred Linux distribution on all nodes, and a distribution with a graphical interface on the head node. Popular choices: CentOS, OpenSUSE, Scientific Linux, RedHat and SLES.
    • The author highly recommends using the Rocks Cluster Distribution. In addition to installing all the software and tools needed for a cluster, Rocks provides an excellent method for quickly "rolling out" multiple copies of the system to identical servers using PXE boot and Red Hat's Kickstart procedure.
  • Install the message-passing interface, resource manager and other required libraries. If you did not install Rocks in the previous step, you will have to install the software needed for parallel computing by hand (a minimal job script tying these pieces together is sketched after this list).

    • First you need a batch system, for example the Torque Resource Manager (a descendant of the Portable Batch System, PBS), which allows jobs to be split up and distributed across multiple machines.
    • Add the Maui Cluster Scheduler on top of Torque to complete the installation.
    • Next, you need to set up a message-passing interface, which is needed so that individual processes on the individual nodes can share data. Open MPI is the easiest option.
    • Do not forget about multithreaded math libraries and compilers that will "build" your programs for distributed computing. Did I mention that you should just install Rocks?
  • Connect the computers into a network. The head node sends tasks to the compute nodes, which in turn must return the results and also send messages to each other. The faster all of this happens, the better.

    • Use a private Ethernet network to connect all the nodes into the cluster.
    • The head node can also act as an NFS, PXE, DHCP, TFTP and NTP server on that Ethernet network.
    • You must separate this network from the public one to ensure that its packets do not collide with other traffic on the LAN.
  • Test the cluster. The last thing you should do before giving users access to the computing power is to test its performance. The HPL (High Performance Linpack) benchmark is a popular option for measuring the compute speed of a cluster. You need to compile it from source with the highest degree of optimization your compiler allows for the architecture you chose.

    • You should, of course, compile with all the optimization settings available for the platform of your choice. For example, if you are using AMD CPUs, compile with Open64 at optimization level -O3.
    • Compare your results with TOP500.org to pit your cluster against the 500 fastest supercomputers in the world!
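To make the Torque/Open MPI/HPL pieces above a bit more concrete, here is a minimal sketch of a job script as it might be submitted on such a cluster. The node count, walltime and paths (the xhpl binary and its HPL.dat) are assumptions for illustration, not taken from a real installation:

#!/bin/sh
#PBS -N hpl_test            # job name
#PBS -l nodes=4:ppn=4       # request 4 nodes with 4 cores each
#PBS -l walltime=01:00:00   # one-hour limit

cd $PBS_O_WORKDIR           # directory the job was submitted from
# run the HPL binary across all allocated cores; HPL.dat must sit in this directory
mpirun -np 16 -hostfile $PBS_NODEFILE ./xhpl

The script would be submitted with "qsub hpl_test.sh" and watched with "qstat"; Maui then decides when and where it actually runs.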

    Creation of a cluster based on Windows 2000/2003. Step by step

    A cluster is a group of two or more servers that work together to provide uptime for a set of applications or services and are perceived by the client as a single entity. Cluster nodes are interconnected by means of networking hardware, shared resources and server software.

    Microsoft Windows 2000/2003 supports two clustering technologies: Network Load Balancing clusters and server clusters.

    In the first case (load-balancing clusters), the Network Load Balancing service gives services and applications a high level of reliability and scalability by combining up to 32 servers into a single cluster. Client requests are distributed among the cluster nodes transparently. If a node fails, the cluster automatically reconfigures itself and switches the client to any of the available nodes. This cluster configuration mode is also called active-active mode, where one application runs on several nodes at once.

    A server cluster distributes its load among the servers in the cluster, with each server carrying its own load. If a node in the cluster fails, applications and services configured to run in the cluster are transparently restarted on any of the free nodes. Server clusters use shared disks for communication within the cluster and to provide transparent access to cluster applications and services. They require special hardware, but this technology provides a very high level of reliability, since the cluster itself has no single point of failure. This cluster configuration mode is also called active-passive mode: an application in the cluster runs on a single node, with the shared data located on external storage.

    The cluster approach to internal networking provides the following benefits:

    • High availability. If a service or application fails on a cluster node that is configured for clustering, the cluster software can restart the application on a different node. Users will experience at most a short delay in the operation, or will not notice the server failure at all.
    • Scalability. For applications running in a cluster, adding servers to the cluster means more capacity: fault tolerance, load balancing, and so on.
    • Manageability. Using a single interface, administrators can manage applications and services, configure the reaction to a failure of a cluster node, distribute the load among the cluster nodes and take the load off nodes for preventive maintenance.

    In this article I will try to gather my experience of building Windows-based cluster systems and give a small step-by-step guide to creating a two-node server cluster with shared data storage.

    Software Requirements

    • Microsoft Windows 2000 Advanced (Datacenter) Server or Microsoft Windows 2003 Server Enterprise Edition installed on all servers in the cluster.
    • An installed DNS service. Let me explain a little: if you are building a cluster from two domain controllers, it is much more convenient to use the DNS service, which you install anyway when creating Active Directory. If you are creating a cluster from two servers that are members of a Windows NT domain, you will have to use either WINS or enter the name-to-address mappings in the hosts file (a sample hosts file is shown right after this list).
    • Terminal Services for remote server management. Not required, but if you have Terminal Services it is convenient to manage the servers from your workplace.
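    For the hosts-file option, the mapping is just plain name-to-address lines on each node; the names and addresses below are placeholders matching the example network used later in this article:

    # %SystemRoot%\system32\drivers\etc\hosts
    192.168.100.1    node1.mycompany.ru    node1
    192.168.100.2    node2.mycompany.ru    node2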

    Hardware Requirements

    • It is best to select hardware for the cluster nodes based on the Cluster Service Hardware Compatibility List (HCL). As Microsoft recommends, hardware should be tested for compatibility with Cluster Services.
    • Accordingly, you will need two servers, each with two network adapters, and a SCSI adapter with an external interface for connecting the external data array.
    • An external array with two external interfaces. Each cluster node connects to one of the interfaces.

    Comment: to create a two-node cluster it is not at all necessary to have two absolutely identical servers. After a failure of the first server you will have some time to analyze the problem and restore the main node, while the second node keeps the system as a whole running. This does not mean, however, that the second server will sit idle: both cluster nodes can calmly go about their business and solve different tasks. What we can do is configure a particular critical resource to work in the cluster, thereby increasing its fault tolerance.

    Requirements for network settings

    • A unique NetBIOS name for the cluster.
    • Five unique static IP addresses. Two for network adapters on a cluster network, two for network adapters on a shared network, and one for a cluster.
    • Domain account for the Cluster service.
    • All cluster nodes must be either member servers in the domain or domain controllers.
    • Each server must have two network adapters. One for connecting to a public network (Public Network), the second for data exchange between cluster nodes (Private Network).

    Comment: According to Microsoft's recommendations, your server should have two network adapters, one for the shared network and one for data exchange within the cluster. Is it possible to build a cluster on a single interface? Probably yes, but I have not tried it.

    Cluster installation

    When designing a cluster, you must understand that by using the same physical network for both cluster traffic and the local network you increase the failure rate of the whole system. It is therefore highly desirable to use one subnet, set aside as a separate physical network segment, for cluster data exchange, and a different subnet for the local network. This increases the reliability of the system as a whole.

    In the case of a two-node cluster, one switch is used for the public network. The two servers in the cluster can also be linked directly with a crossover cable, as shown in the figure.

    Installing a two-node cluster can be split into 5 steps

    • Installing and configuring nodes in a cluster.
    • Installing and configuring a shared resource.
    • Checking the disk configuration.
    • Configuring the first node in the cluster.
    • Configuring the second node in the cluster.

    This step-by-step guide will help you avoid mistakes during installation and save you tons of time. So, let's begin.

    Installing and configuring nodes

    We will simplify the task a little. Since all cluster nodes must be either domain members or domain controllers, we will make the first cluster node the root domain controller holding the AD (Active Directory) directory, with the DNS service running on it as well. The second cluster node will be a full domain controller.

    I will skip the installation of the operating system, assuming you should not have any problems with it. The configuration of the network devices, however, I would like to explain.

    Network settings

    Before starting the installation of the cluster and Active Directory, you must complete the network settings. I would like to divide all the network settings into 4 stages. For name resolution on the network it is advisable to have a DNS server that already contains records for the cluster servers.

    Each server has two network cards. One network card will serve for data exchange between cluster nodes, the second will work for clients in our network. Accordingly, we will name the first one Private Cluster Connection, the second one will be called Public Cluster Connection.

    The network adapter settings are identical on both servers. Accordingly, I will show how to configure one network adapter and give a table with the network settings of all four network adapters on both cluster nodes. To configure a network adapter, follow these steps:

    • My Network Places → Properties
    • Private Cluster Connection → Properties → Configure → Advanced

      This point is self-explanatory. According to Microsoft's strong recommendations, all network adapters of the cluster nodes should be set to the optimal adapter speed, as shown in the following figure.

    • Internet Protocol (TCP / IP) → Properties → Use the following IP: 192.168.30.1

      (For the second node, use 192.168.30.2). Enter the subnet mask 255.255.255.252. Use 192.168.100.1 as the DNS server address for both hosts.

    • Additionally, on the Advanced → WINS tab, select "Disable NetBIOS over TCP/IP". For the network adapters of the public (Public) network, skip this step.
    • Do the same with the network card for the local network, Public Cluster Connection. Use the addresses shown in the table. The only difference in the configuration of the two network cards is that for Public Cluster Connection you do not need to disable NetBIOS over TCP/IP (the WINS setting).

    Use the following table to configure all network adapters on the cluster nodes:

    Node  Network name                IP address     Mask             DNS server
    1     Public Cluster Connection   192.168.100.1  255.255.255.0    192.168.100.1
    1     Private Cluster Connection  192.168.30.1   255.255.255.252  192.168.100.1
    2     Public Cluster Connection   192.168.100.2  255.255.255.0    192.168.100.1
    2     Private Cluster Connection  192.168.30.2   255.255.255.252  192.168.100.1
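    If you prefer to script these settings rather than click through the dialogs, the same values can be applied from the command line with netsh; a rough sketch for the first node, using the adapter names and addresses from the table above:

    netsh interface ip set address name="Private Cluster Connection" static 192.168.30.1 255.255.255.252
    netsh interface ip set address name="Public Cluster Connection" static 192.168.100.1 255.255.255.0
    netsh interface ip set dns name="Public Cluster Connection" static 192.168.100.1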

    Installing Active Directory

    Since my article is not about installing Active Directory, I will omit this point; plenty of recommendations and books have been written about it. Take a domain name like mycompany.ru, install Active Directory on the first node, and add the second node to the domain as a domain controller. When done, check the configuration of the servers and of Active Directory.

    Installing Cluster User Account

    • Start → Programs → Administrative Tools → Active Directory Users and Computers
    • Add a new user, for example ClusterService.
    • Check the boxes for: User Cannot Change Password and Password Never Expires.
    • Also add this user to the Administrators group and grant it the Log on as a service right (rights are assigned in Local Security Policy and Domain Controller Security Policy).

    Setting up an external dataset

    To set up the external data array in a cluster, remember that before installing Cluster Service on the nodes you must first configure the disks on the external array, then install the cluster service on the first node, and only then on the second. If the installation order is broken, you will fail. Can it be fixed? Probably yes: when the error appears, you will have time to adjust the settings. But Microsoft is such a mysterious thing that you never know which rake you will step on next. It is easier to have step-by-step instructions in front of you and not forget to press the buttons. Step by step, configuring the external array looks like this:

    1. Both servers must be turned off; the external array must be turned on and connected to both servers.
    2. Turn on the first server. We get access to the disk array.
    3. Check that the external disk array is set up as Basic. If not, convert the disk using the Revert to Basic Disk option.
    4. Create a small partition on the external drive through Computer Management → Disk Management. According to Microsoft's recommendations it should be at least 50 MB; I recommend creating a partition of 500 MB or a little more. This is enough to hold the cluster data. The partition must be formatted with NTFS.
    5. On both cluster nodes this partition will be assigned the same letter, for example Q. Accordingly, when creating the partition on the first server, select Assign the following drive letter - Q.
    6. You can partition the rest of the disk as you wish. Of course, it is highly desirable to use the NTFS file system. For example, when configuring the DNS and WINS services, their primary databases will be moved to the shared disk (not the system volume Q, but the second one you created). And for security reasons it will be more convenient for you to use NTFS volumes.
    7. Close Disk Management and check access to the newly created partition. For example, you can create a text file test.txt on it, write to it and delete it. If everything went well, the configuration of the external array on the first node is done.
    8. Now turn off the first server. The external array must remain on. Turn on the second server and check access to the created partition. Also check that the letter assigned to the first partition is the same as the one we selected, that is, Q.

    This completes the configuration of the external array.

    Installing Cluster Service Software

    Configuration of the first cluster node

    Before starting the Cluster Service software installation, all cluster nodes must be turned off and all external arrays turned on. Let's move on to the configuration of the first node: the external array is on, the first server is powered up. The entire installation process is carried out using the Cluster Service Configuration Wizard:


    Configuring the second cluster node

    To install and configure the second cluster node, the first node must be turned on and all the shared (external) drives turned on. The procedure for setting up the second node is very similar to the one I described above, with minor differences. Use the following instructions:

    1. In the Create or Join a Cluster dialog box, select The second or next node in the cluster and click next.
    2. Enter the cluster name that we specified earlier (in the example, it is MyCluster), and click next.
    3. After the second node connects to the cluster, the Cluster Service Configuration Wizard automatically picks up all the settings from the primary node. Use the account we created earlier (ClusterService) to start the Cluster service.
    4. Enter the account's password and click Next.
    5. In the next dialog box, click Finish to complete the installation.
    6. Cluster service will be launched on the second node.
    7. Close the Add / Remove Programs window.

    Use the same instructions to install additional cluster nodes.

    Postscript, thanks

    So that you do not get lost among all the stages of the cluster installation, here is a small table reflecting the main steps.

    Step Node 1 Node 2 External array