What are proprietary protocols

The copyright holder of proprietary software retains a monopoly on its use, copying and modification, in whole or in significant part. Usually any non-free software is called proprietary, including semi-free software.

The concept under consideration is not directly related to the concept of commercial software.

Etymology

FSF

The term "proprietary software" is used by the FSF (Free Software Foundation) to designate software that is not free software from the Foundation's perspective.

Semi-free software

Software that allows almost unlimited use, distribution and modification (including distribution of modified versions) for non-commercial purposes was previously called semi-free by the Free Software Foundation. Like Debian, the Free Software Foundation considered such terms unacceptable for free software, but distinguished semi-free software from proprietary software. "Proprietary software" and "semi-free software" were collectively referred to as "non-free software." Later the FSF abandoned the notion of "semi-free software" and began using the term "proprietary software" for all non-free software.

Means of Restrictions

Prevention of use, copying or modification can be achieved by legal and/or technical means.

The technical means include releasing only machine-readable binaries, restricting access to human-readable source code (closed source), and making it difficult to use self-made copies. Proprietary source code is usually accessible only to employees of the developer company, though more flexible access terms may apply, under which source code is provided to company partners, technical auditors or others in accordance with company policy.

Legal means may include trade secrets, copyrights, and patents.

Legal protection

Legal protection of computer programs is possible under two different legal regimes: the regime applicable to literary works and the regime applicable to patents. In the first case the program is identified (and protected) by the text of its code; in the second, by the patentability criteria used for inventions (that is, one must prove "novelty", "originality" and "non-obviousness", as well as the ability to solve an existing technical problem and commercial applicability).

Legal protection of computer programs is based on the provisions of a number of international agreements and conventions. However, almost all of them (such as the Paris, Berne and Rome Conventions and the Washington Treaty) are to one degree or another incorporated into the text of the TRIPS Agreement administered by the World Trade Organization. The TRIPS Agreement provides that computer programs are protected "like literary works under the Berne Convention (1971)". In practice, however, the second regime of protection, the patent, is increasingly used for proprietary digital content (Art. 27 TRIPS). In the United States, for example, the first software patent, No. 3,380,029, was issued to Martin A. Goetz in 1968. However, a full-fledged legal doctrine for patenting computer programs in the United States did not form until the 1980s, as a result of a series of judicial precedents (Gottschalk v. Benson; Diamond v. Diehr) that developed specific criteria applicable to patenting computer programs. Before that time, one cannot speak of patenting computer programs in the United States as an established procedure. One of the more recent patents for a computer program, No. US 9,230,358 B2, issued on January 6, protects a method, system and computer program for visualizing widgets. In the EU, the patenting of computer programs is based on the provisions of the EPC, as clarified by a number of decisions of the European Patent Office. In case No. T258/03 (Hitachi/Auction method) of 21.04.2004, the Board of Appeal of the European Patent Office indicated that Art. 52(1) and 52(2) EPC do not prohibit the patenting of computer programs, but not every "technical solution" can be patented.
According to representatives of the Patent Office, the use of "technical means" to solve a problem is understandable in itself, but the "technical solution" must also be complemented by an "innovative" one.

Typical limitations of proprietary software

There are many different business models, and proprietary software companies write their own licensing agreements based on them. The most typical limitations of proprietary software are listed below.

Restriction on commercial use

There are a huge number of software products that are free for non-commercial use by individuals, medical and educational institutions, non-profit organizations, etc., but require payment when the software is used for profit. Such software is very popular and widely used, and thanks to being free of charge it enjoys good technical support from specialists, who incur no additional training costs.

Restriction on distribution

This type of restriction usually accompanies large software projects in which the copyright holder demands payment for each copy of the program. This limitation is usually applied to software products targeting a narrow "professional" market segment, or to software required by a large number of users. Examples are the Adobe CS6 software package and the Windows 8 operating system.

Restriction on learning, modification, etc.

This type of restriction applies only to closed-source software packages and may prohibit or restrict any modification of the program code, disassembly and decompilation.

Proprietary by default

For legal and technical reasons, software is ordinarily proprietary by default.

Software in compiled languages is used in a form not intended for editing, that is, without source code. The author may not distribute the source out of habit, or may consider it not of sufficient quality to show decently.

Due to the variety of licenses, it can be difficult for the author to choose the best one.

Among supporters of free software there are differing opinions on how important user freedoms are for software that runs only on a remote system (server software did not fall under the copyleft terms of the GNU GPL, which is why the Affero GPL appeared); for software that seemingly runs "on the Internet" but is in fact downloaded to the user's computer for execution each time (for example, scripts from websites, sometimes hundreds of kilobytes in size, with unreadable shortened function names); and for algorithms implemented in hardware (which reduces the share of conventional non-free software but does not make the user any freer). See GNU AGPL, open-source hardware, GNUzilla (a web browser with an add-on that blocks the execution of non-trivial non-free JavaScript programs).


The main difference between portable audio players of the "Internet generation" and their predecessors is close integration with a personal computer.

In the past, portable players played music from common removable media and did not need to connect to anything other than headphones.

Until recently, the PC was their only source of content. At the same time, there was a need for a mechanism for pairing a portable player and a computer. The task was to transfer data in digital form from one medium (in a PC) to another (in a player) in a format accepted in the computer industry.

Content must be transferred from the computer, where it is usually stored on a 3.5" hard drive, to the media of portable players: flash memory, or hard drives in the 1" and 1.8" form factors.

Therefore, you need to use some kind of computer data transfer interface.

Any such interface can be divided into two levels. The first is physical: the actual wires, connectors, chips, and so on. The second is software: relatively speaking, the set of instructions and algorithms by which data is exchanged at the physical level. This software layer is often referred to as a protocol, and it is what we will mainly talk about today. For the user it is the "face" of the interface; its capabilities and shortcomings determine how convenient or difficult the device is to use as a whole.

For a protocol to operate fully, two software components are required. The first is a driver, which directly interfaces the portable device and the PC at the software level. The second is the software that lets the user manage the protocol and use it for their own needs. Strictly speaking, this software is not a direct part of the protocol, but without it the protocol's very existence loses its meaning. Therefore, within this article we will treat the software as an integral part of the protocols under consideration.

Any software protocol uses drivers and software, although the implementation of these components can differ greatly from case to case. The success of a protocol can be roughly estimated by the formula: capabilities minus inconveniences. Capabilities means the range of available functions. Inconveniences usually include opacity in use, awkward installation, difficulty of learning, and limited compatibility.

By the standards of digital portable technology, MP3 players are already a rather elderly class of devices. They emerged at a time when the PC infrastructure was completely unprepared for the computer's role as a multimedia host. At both the physical and the software level there were no widespread, standardized solutions in this area; they were only in development, preparing to enter the market, or existed in small numbers. Other related classes of portable devices were in a similar position: mobile drives, digital cameras, cell phones, PDAs. All these types of mobile technology appeared at about the same time, in the second half of the 90s, and their appearance created the need to develop unified standard protocols for interfacing PCs with portable equipment.

On a physical level, everything was relatively clear. The first portable devices were forced to use COM and LPT - there were simply no other widely available interfaces at that time.

The LPT connector can still be found on most PCs.

So it was the physical LPT interface that was used by the forefathers of MP3 players, the Saehan MPMan and the Diamond Rio. This period can be called "artisanal": developers had to use interfaces and protocols originally created for completely different tasks.

However, the new generation of portable audio did not have to suffer long in the shackles of slow and inconvenient interfaces: the very next year, 1999, manufacturers presented a wide range of devices using a new standard: USB, the Universal Serial Bus.

For a while there was a semblance of a struggle between USB and FireWire, a protocol prevalent mainly on Apple Macintosh computers.

The FireWire-USB war in portable players: from the iPod 1G (FireWire only) to the iPod 5G (USB only)

However, the bulk of portable audio quickly and painlessly moved to the Universal Serial Bus, closing, at least within the framework of wired solutions, the issue with the transfer of data at the physical level.

It was far from easy for the software protocols. In the short period when manufacturers were forced to use COM and LPT, the software protocols were exclusively of their own design. They could hardly have been anything else: both COM and LPT were created, of course, not at all for transferring multimedia files to portable digital players. Apart from the device developers themselves, there was no one to write drivers and software shells for them, and there was no question of standardization at all.

But the advent of USB did not solve the problem either. The industry was primarily concerned with creating standard protocols for mobile drives and digital cameras. The situation with MP3 players was not at all clear: would they be banned or not, and if not, what features would the protocols need for the SDMI forum to give the go-ahead. In such circumstances the development of software protocols still fell on the shoulders of the device developers. This went on for at least four years until, finally, standard data transfer protocols began to appear in players. These years were the time of the dominance of the first type of software protocols: proprietary, or closed, ones.

Their characteristic feature was individual drivers and software for each manufacturer, and often for each new generation of players within the products of the same manufacturer.

An example of a proprietary protocol: RCA-Thomson Lyra. It uses its own driver (PDP2222.SYS), which must be installed separately on each PC to which the player is to be connected

This created many inconveniences for the user. There was no question of any transparency: users had to install the drivers and software for their player manually, and various difficulties could arise, for example if the buyer mistakenly connected the player to the PC before installing the drivers.

Having confused the order of steps when connecting the player, the user risked admiring a message like this even after installing all the necessary drivers

It's also better to forget about compatibility: drivers and software worked only with a given player model (at best, with several models from one manufacturer), and the player could only work with a PC on which those drivers and software were installed. At first the capabilities of the software shells were very scarce and limited exclusively to copying audio files into the player's memory. Later various ways of copying content appeared: track by track, or by synchronizing the contents of the player's memory with a library compiled from the audio files on the PC. Individual shells supported only one of these methods. Over time the software acquired additional features, for example copying arbitrary files, not only those supported by the player, which made it possible to use the player as a storage device (the function was nicknamed Data Taxi). However, the fact that this operation required drivers and software to be installed on the PC seriously reduced its usefulness. As a rule, there was no talk at all of high aesthetic qualities, impeccable operation or good ergonomics in these shells. Users then were for the most part harsh and unspoiled people: files get copied to the player, and that's fine.

Most manufacturers went through a proprietary stage: iriver, Rio Audio, Creative, Cowon, Mpio, etc. Each at one time had its own drivers and its own software, some more successful, some less. In any case, having swapped the player for a device from another manufacturer, the user was forced to adapt to a new shell with its own logic and quirks. Many manufacturers still bundle such programs with their devices as an alternative to the standard MSC/UMS or MTP solutions.

Proprietary manager shells: iriver Music Manager, Cowon JetAudio, Mpio Manager, Creative Play Center

The zoo of shells could not please consumers (a heap of motley drivers they could still put up with). Nor did it suit manufacturers, especially small ones without the resources to develop high-quality software. Therefore, in 1999 and the early 2000s, shells from third-party developers gained some popularity.

Among them was the MusicMatch Jukebox program.

The MusicMatch Jukebox interface

Before the advent of iTunes for Windows, it was MusicMatch Jukebox that was used on Windows PCs to work with the Apple iPod. It also worked with players from other manufacturers, such as RCA-Thomson.

Programs like MusicMatch Jukebox were the first sprout of standardization. They made it possible to use players from different manufacturers without installing additional software for each of them. The solution was not perfect, just a step from the motley protocols and shells toward standardized solutions. In this case, however, only the protocol management interface was standardized; separate drivers still had to be installed for each device. The shells themselves were not part of the operating system and had to be installed separately, from the Internet or from a bundled CD. Their functionality, stability and convenience were often questioned. Not all players were supported, which forced manufacturers to bundle their devices with plugins for the popular manager programs.

Available for download on the RCA Lyra website: first and foremost a plugin for MusicMatch Jukebox, and only then the company's own Lyra DJ shell

Usually, over time, such a program "got fat": it became overgrown with functions the user did not need and with advertising, and demanded ever more resources.

Another shell popular in the past: RealJukebox

At the same time, the pressure from competitors grew: in 2001 Windows Media Player was included in the standard delivery of Windows XP, and in 2003 iTunes for Windows appeared. In 2002-2003, small Asian companies also found a good replacement for such software: the open MSC/UMS protocol. As a result, shells like MusicMatch Jukebox left the scene, giving way to next-generation protocols. But their legacy was not in vain: the "one shell for different players" model was largely inherited by Microsoft's PlaysForSure system.

One feature of proprietary systems, however, allowed them to live longer in certain markets. Their closed nature created obstacles to using the player to freely transfer data from PC to PC, that is, in the RIAA's view, to using it as a tool of digital piracy. In troubled markets like America's, companies reluctant to draw too much attention to themselves continued to take the proprietary approach even after universal solutions had spread.

On the iriver site, non-proprietary firmware is still marked as UMS or MTP

Recall, for example, iriver or Creative. iriver players were generally released in two versions: for the US market, working through a proprietary protocol, and for other markets, through open MSC/UMS. This "life after death" of closed protocols continued until the 2004 release of the MTP protocol, which, while relatively universal, also suited the record companies.

The period of 2002-2004 was the transition from the "dark ages" of closed systems to the relative openness of modern protocols. Today, purely proprietary protocols have fallen completely out of use.

First of all, let us define the areas in which data transmission channels are applied in the electric power industry and the tasks solved with their help. At present, the main application areas of data transmission systems include relay protection and automation (RPA) systems, dispatching and automated technological control of electric power facilities (ASTU), and automated energy metering systems. Within these systems, the following tasks are solved:

ASTU systems

  1. Data transfer between local telemechanics (TM) devices, relay protection and automation devices, and the central transceiver station (CTSP).
  2. Data transfer between the object and the dispatch center.
  3. Data transfer between dispatch centers.

Accounting systems

  1. Data transfer from metering devices to data collection and transmission devices (USPD).
  2. Data transfer from the USPD to the server.

As for relay protection systems, the following can be noted: although digital collection of data from relay protection and automation devices into the automated control system began with the advent of digital relay protection devices, connections between the devices themselves are still organized with analog circuits.

In relay protection and automation systems, information transmission can perform the following functions:

  1. Transfer of discrete signals.
  2. Data transfer between relay protection and automation devices and the CTSP.

Another important transmission channel, common to relay protection and automation systems, automated control systems and metering systems alike, is the channel through which measurements are transmitted from measuring current and voltage transformers. Until recently there was no question of introducing digital communication protocols at this level; however, bearing in mind the emergence of IEC 61850-9-2, a protocol for transmitting instantaneous values of current and voltage, it is worth dwelling on the problems of this information channel as well.

We will sequentially consider each of the above functions of information transfer and the existing approaches to their implementation.

Transfer of measurements from CTs and VTs

Signals from measuring current (CT) and voltage (VT) transformers are transmitted as alternating current and voltage, respectively, over cables with copper conductors. This method has typical problems that are often mentioned in the literature:

  • the extensive branching and length of the copper cables, which require a large amount of auxiliary equipment (test blocks, terminal blocks, etc.) and, as a result, increase the cost of the systems and the complexity of installation and commissioning;
  • the exposure of the measuring circuits to electromagnetic interference;
  • the difficulty or impossibility of monitoring the health of the measuring channel during operation, and the difficulty of locating damage;
  • the influence of the resistance of the measuring circuits on measurement accuracy, and the need to match the CT/VT power with the resistance of the circuits and the load of the receiver.

Transfer of discrete signals between devices

Transfer of discrete signals between devices is traditionally carried out by applying an operating voltage to a discrete input of one device through the closing of the output relay of another.

This method of transferring information has the following disadvantages:

  • a large number of control cables are required between the equipment cabinets;
  • devices must have a large number of discrete inputs and outputs;
  • the number of transmitted signals is limited by a certain number of discrete inputs and outputs;
  • there is no possibility to control communication between devices;
  • false triggering of the discrete input of the device is possible when there is a short to ground in the signal transmission circuit;
  • circuits are susceptible to electromagnetic interference;
  • the complexity of the expansion of relay protection systems.
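By contrast, a digital link removes most of these limitations: any number of signals can travel in one message, and a sequence number lets the receiver supervise the link. The toy sketch below illustrates the idea with an invented message format (it is not a real protocol such as IEC 61850 GOOSE; the field layout is purely for demonstration):

```python
import struct
import time

def pack_signals(seq: int, signals: list) -> bytes:
    """Pack any number of boolean signals, plus a sequence number and a
    timestamp, into a single message (invented demo format)."""
    bits = 0
    for i, s in enumerate(signals):
        bits |= (1 if s else 0) << i
    header = struct.pack('<IdI', seq, time.time(), len(signals))
    return header + bits.to_bytes((len(signals) + 7) // 8, 'little')

def unpack_signals(msg: bytes):
    """Recover sequence number, timestamp and signal list from a message."""
    seq, ts, n = struct.unpack_from('<IdI', msg)
    bits = int.from_bytes(msg[16:], 'little')
    return seq, ts, [bool(bits >> i & 1) for i in range(n)]
```

A receiver that sees a gap in the sequence numbers knows immediately that the link or the sender has failed, which is exactly the supervision capability the hardwired scheme lacks.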

Data transfer between relay protection devices and the CTSP

Data exchange between relay protection and automation equipment at the facility is carried out in digital format. However, due to the need to integrate a large number of diverse devices, this method has the following features:

  • the existence of a large number of different data transfer protocols: for successful integration of arbitrary devices, the CTSP must support all of them;
  • the absence of a unified data naming system, which leads to the need to maintain a large amount of descriptive documentation, as well as to difficulties and errors during setup;
  • a relatively low data transfer rate due to the presence of a large number of serial interfaces.

Data transfer between the facility's CTSP and the dispatch center

Data transfer between the facility and the dispatch center is also carried out in digital format. Typically, the IEC 60870-5-101/104 protocols are used for these purposes. Features of the implementation of these communication systems:

  • the need to transfer data in dispatch control protocols that, as a rule, differ from the protocols used at the substation;
  • the transfer of a limited amount of information, caused by the need to remap all signals from one protocol to another and, as a result, the loss of data whose transfer was not considered appropriate at the design stage;
  • the lack of uniform signal names within the facility and in the network control centers (NCC), which complicates setup and error tracing.

Let us turn to Fig. 1, which shows a diagram of how data transfer is organized. Note the large number of proprietary protocols. Their widespread use requires, firstly, a large number of gateways (converters) and, secondly, good qualifications and experience of personnel in working with various protocols. Ultimately this leads to system complexity and to problems of operation and expansion.

Fig. 1. Diagram of the organization of data transmission.

Let's briefly describe the standard protocols shown.

Modbus

Modbus is one of the most common network protocols for integrating relay protection and automation devices into automated control systems; it is built on a client-server architecture. The popularity of this protocol is largely due to its openness, so most devices support it.

The Modbus protocol can be used to transfer data over RS-485, RS-422 and RS-232 serial communication lines, as well as over TCP/IP networks (Modbus TCP).

The Modbus standard consists of three parts, describing the application layer of the protocol, the specification of the link and physical layers, and the ADU specification for transport over the TCP / IP stack.

The advantages of this standard include its wide adoption and the relative ease of implementing systems based on it. Its disadvantages are that an end device cannot signal the master on its own initiative when the need arises, and that the standard does not allow end devices to exchange data with each other without the participation of the master. This significantly limits the applicability of Modbus solutions in real-time control systems.
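The framing itself illustrates this ease of implementation. As a minimal sketch (not a complete Modbus stack; the slave address and register range below are arbitrary example values), the following Python fragment builds a serial RTU request frame with the standard CRC-16/Modbus checksum:

```python
import struct

def crc16_modbus(data: bytes) -> int:
    """CRC-16/Modbus: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_rtu_frame(slave_addr: int, pdu: bytes) -> bytes:
    """Serial RTU ADU = slave address + PDU + CRC (low byte first)."""
    adu = bytes([slave_addr]) + pdu
    crc = crc16_modbus(adu)
    return adu + bytes([crc & 0xFF, crc >> 8])

# Function 0x03 (Read Holding Registers), start address 0, quantity 2,
# addressed to slave 17 -- all example values, not from any real device.
request = build_rtu_frame(17, struct.pack('>BHH', 0x03, 0, 2))
```

For Modbus TCP the CRC is dropped and the PDU is prefixed with an MBAP header instead; the application layer stays the same, which is why the protocol ports so easily between serial lines and TCP/IP.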

IEC 60870-5-101/103/104

IEC 60870-5-101 is a telecontrol protocol intended for transmitting telemetry signals to the automated control system. It is likewise built on a client-server architecture and is designed to transmit data over RS-232/485 serial communication lines.

The IEC 60870-5-104 protocol is an extension of the 101 protocol and regulates the use of network access via the TCP / IP protocol. IEC 60870-5-101 / 104 does not imply a semantic data model.

The IEC 60870-5-103 protocol is designed to provide the possibility of integrating relay protection and automation devices into the control system of a power facility. Unlike IEC 60870-5-101 / 104 standards, it defines semantics for a fixed set of data generated by relay protection devices. One of the main disadvantages of the IEC 60870-5-103 protocol is the relatively low data transfer rate.

The IEC 60870-5-101/103/104 protocols provide sufficiently high functionality for telecontrol, telesignaling and telemetry, and for integrating devices into control systems. Unlike Modbus, they also allow sporadic data transfer initiated by the devices.

The protocols, as in the previous case, are based on the exchange of tables of signals, and the types of data that are exchanged are rigidly fixed.

In general, the protocols are well suited for solving the above range of tasks, but they have a number of disadvantages:

  1. Data transfer is carried out in two stages: indexed communication objects are mapped to application objects, and application objects are then mapped to variables in the application database or program. Thus there is no semantic link (in whole or in part) between the transmitted data and the data objects of the application functions.
  2. The protocols do not provide for real-time signal transmission. Real-time signals here mean data that must be transmitted at the rate of the process with the shortest possible delays, such as trip commands or instantaneous values of currents and voltages from measuring transformers. For such signals, delays in the communication channel are critical. Note that this point is not about the ability to synchronize devices with a single time server; it concerns specifically the speed of data transfer between devices.
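For comparison with Modbus, the transport framing of IEC 60870-5-104 is easy to show concretely. The sketch below is illustrative only (ASDU encoding is out of scope); it builds the standard STARTDT activation U-frame and an I-format header with the protocol's shifted sequence numbers:

```python
import struct

START = 0x68  # every IEC 104 APDU begins with this start octet

def u_frame(function_bits: int) -> bytes:
    """U-format APDU: four control octets, no ASDU (e.g. STARTDT act = 0x07)."""
    return bytes([START, 4, function_bits, 0x00, 0x00, 0x00])

def i_frame(send_seq: int, recv_seq: int, asdu: bytes) -> bytes:
    """I-format APDU: N(S) and N(R) are carried shifted left by one bit,
    little-endian, in the four control octets; the ASDU follows."""
    apci = struct.pack('<HH', send_seq << 1, recv_seq << 1)
    return bytes([START, len(apci) + len(asdu)]) + apci + asdu

STARTDT_ACT = u_frame(0x07)  # sent by the controlling station to open data flow
```

The length octet counts the control field and ASDU but not the start octet or itself, which is why the U-frame's length is always 4.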

DNP3

In Russia this standard is not widely used, but some automation devices still support it. For a long time the protocol was not standardized, but it has now been approved as the IEEE 1815 standard.

DNP3 supports both RS-232/485 serial communications and TCP/IP networks. The protocol describes three layers of the OSI model: application, data link and physical. Its distinctive feature is the ability to transfer data both from the master to the slave and between slaves. DNP3 also supports sporadic data transmission from slaves.

As with IEC-101/104, data transfer is based on the principle of transferring a table of values. At the same time, to optimize the use of communication resources, not the entire database is sent, but only its changed part.

An important difference between DNP3 and the protocols considered earlier is its attempt at an object description of the data model and the independence of data objects from the transmitted messages. DNP3 uses an XML description of the information model to describe the data structure.
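The link-layer framing of DNP3 can also be sketched briefly. The fragment below builds only the 10-octet link header (in a full frame, user data would follow in blocks of up to 16 octets, each with its own CRC); the control octet and addresses are arbitrary example values:

```python
import struct

def crc16_dnp(data: bytes) -> int:
    """CRC-16/DNP: reflected polynomial 0xA6BC, init 0x0000, final XOR 0xFFFF."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA6BC if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def link_header(control: int, dest: int, src: int, user_len: int = 0) -> bytes:
    """DNP3 link header: start octets 0x05 0x64, LEN, control, dest, src, CRC.
    LEN counts the control, address and user-data octets but not the CRCs."""
    hdr = struct.pack('<BBBBHH', 0x05, 0x64, 5 + user_len, control, dest, src)
    return hdr + struct.pack('<H', crc16_dnp(hdr))
```

Note that, unlike Modbus, every 16-octet block of a DNP3 frame is individually CRC-protected, which improves error detection on noisy serial links.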

A detailed comparison of the protocols is available in the literature; Table 1 gives brief excerpts for the protocols discussed above.

Table 1. Data transfer protocols

Parameter                   | Modbus                              | IEC-101/103/104                 | DNP3
Communication lines         | RS-232/485/422, TCP/IP (Modbus TCP) | RS-232/485/422, TCP/IP (104)    | RS-232/485/422, TCP/IP
Architecture                | Client-server                       | Client-server                   | Client-server
Data transfer principle     | Exchange of indexed data points     | Exchange of indexed data points | Exchange of indexed data points
Sporadic data transmission  | No                                  | Yes                             | Yes
Semantic data model         | No                                  | No; basic in -103               | Yes
Real-time data transmission | No                                  | No                              | No

Conclusions

The brief analysis presented shows that the existing communication protocols quite successfully implement the tasks of dispatch control and of integrating data into control systems, but they do not support real-time functions (such as the transfer of discrete signals between relay protection devices or the transfer of instantaneous values of currents and voltages).

A large number of proprietary protocols complicates the process of integrating devices into a single system:

  • The protocols must be supported by both the device controllers and the CTSP, which requires implementing support for a large number of protocols simultaneously and leads to an increase in the cost of equipment.
  • Integrating devices that use proprietary protocols requires commissioning personnel qualified to work with each of them.
  • Remapping signals from proprietary protocols to general industrial ones and back often results in the loss of information, including auxiliary information (such as time stamps, quality flags, etc.).

When transferring data, a large number of serial interfaces are still used, which imposes restrictions on the data transfer rate, the amount of data transferred and the number of devices simultaneously connected to the information network.

Transmission of critical control commands (commands to disconnect switches from relay protection, operational interlocks, etc.) and digitized instantaneous values ​​of currents and voltages is impossible in digital format due to the unsuitability of existing communication protocols for transferring this kind of information.

It should also be noted that the existing communication protocols impose no requirements on a formal description of protocol configurations and transmitted signals, so the design documentation for automated process control systems contains only a hard-copy description of the signals.

Basic provisions for the creation of the IEC 61850 standard

Work on the IEC 61850 standard began in 1995, initially in two independent, parallel working groups: one, formed by the UCA, was developing Generic Object Models for Substation and Feeder Equipment (GOMSFE); the second, formed under IEC Technical Committee 57, worked on a data transfer protocol standard for substations.

Later, in 1997, the work of both groups was combined under the auspices of Working Group 10 of IEC TC 57 and became the basis of the IEC 61850 standard.

The standard is based on three provisions:

  • It should be technology-independent: regardless of technological progress, the standard should require only minimal changes.
  • It should be flexible: the same standardized mechanisms should be able to solve different problems.
  • It should be extensible.

The development of the first edition of the standard took about ten years. By meeting these requirements, the standard can keep up with the changing needs of the electric power industry and take advantage of the latest achievements in computing, communication, and measurement technology.

Today IEC 61850 consists of 25 different documents (including those under development) that cover a wide range of issues and make it much more than just a specification for a number of communication protocols. Let's note the main features of the standard:

  • Determines not only how information should be exchanged, but also what information should be exchanged. The standard describes abstract models of facility equipment and functions performed. The information model underlying the standard is represented in the form of classes of data objects, data attributes, abstract services and descriptions of the relationships between them.
  • Defines the process for designing and setting up systems.
  • Defines the System Configuration description Language (SCL). This language provides the ability to exchange information about the configuration of devices in a standardized format between software from different manufacturers.
  • Describes equipment testing and acceptance procedures.
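As a rough illustration of why a standardized, machine-readable configuration format matters, the sketch below builds a tiny SCL-like XML fragment and extracts the declared IED names using Python's standard library. The element names follow IEC 61850-6 conventions, but the fragment is deliberately simplified and is not a valid, complete SCL file.

```python
# Illustrative sketch: a simplified SCL-like fragment and how a
# configuration tool might read the IED names declared in it.
import xml.etree.ElementTree as ET

SCL_SNIPPET = """\
<SCL xmlns="http://www.iec.ch/61850/2003/SCL">
  <Header id="DemoSubstation"/>
  <IED name="Prot_Relay_1" manufacturer="VendorA"/>
  <IED name="Merging_Unit_1" manufacturer="VendorB"/>
</SCL>
"""

# Prefix-to-namespace mapping used when querying the document.
NS = {"scl": "http://www.iec.ch/61850/2003/SCL"}

def ied_names(scl_text: str) -> list:
    """Return the names of all IEDs declared in an SCL document."""
    root = ET.fromstring(scl_text)
    return [ied.get("name") for ied in root.findall("scl:IED", NS)]

print(ied_names(SCL_SNIPPET))  # -> ['Prot_Relay_1', 'Merging_Unit_1']
```

Because every vendor's tool reads and writes the same format, device descriptions can be exchanged without manual re-entry of signal lists.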

When working with IEC 61850, you need to understand that the standard:

  • does not describe specific implementation methodologies, communication architectures, or requirements for specific products;
  • does not standardize the functionality or algorithms of devices;
  • focuses on describing the externally visible functionality of primary and secondary equipment and of the protection, control, and automation functions.

Of course, work on this scale cannot be perfect. Examples of inaccuracies and flaws in the standard include the lack of methods for formally verifying compliance with its requirements and a number of technical inaccuracies in the description of parameters and the approaches to processing them. These issues will be discussed in more detail in future publications.

The lack of specificity in its requirements and the great freedom it leaves to implementers are often listed among the standard's disadvantages, although according to its developers this is precisely one of its main advantages.


Many of us have heard the term "proprietary". It gets applied to all sorts of things: hardware, drivers, programs. What is this term and what does it mean? The word came to us from English. In a general sense, "proprietary" means a licensed product. We will examine the meaning of the term in more detail below.

The term "proprietary". Meaning

The word "proprietary" comes from the English word proprietary, which means "personal" or "proprietary." This term is used to mean something "not free". Whether it's software or hardware. This means that, for example, it has a certain license. Naturally, you will have to pay money to use the licensed software. And in some cases - a lot of money.

In the case of hardware, the meaning of "proprietary" shifts slightly. The term usually refers to a unique piece of equipment that characterizes a particular manufacturer. For example, Apple smartphones have unique proprietary connectors for charging the device.

Proprietary software (software)

Proprietary software includes device drivers as well as programs with closed source code. In principle, this makes sense: it is hard to say what trouble enthusiasts who love to dig into computer hardware and programs might have caused had the developers opened the source code. Besides, software for hardware is usually free: buy a video card, and technical support in the form of drivers will remain available for as long as the card stays relevant.

With application software, things are not so simple. A prominent example of a company using proprietary licenses is Adobe with its famous Photoshop: using Adobe products costs serious money. Microsoft is not far behind; using its proprietary Windows 10 OS, for example, also requires paying a substantial sum. This is now common practice among major software manufacturers.

Pros and cons of proprietary software

To understand proprietary software better, it helps to weigh its pros and cons. The pluses include the following:

  • Continuous technical support for the product.
  • More stable operation than free software.
  • Guaranteed absence of malware (viruses).
  • Automatic software updates.
  • Full use of the hardware's capabilities.

These were all pluses of proprietary software. Now let's move on to the cons:

  • A substantial license fee.
  • Proprietary device protocols.
  • Dependence on the developer.
  • Inability to change the source code.
  • Restrictions on distributing the software.
  • Restrictions on modifying the software.

As we can see, the biggest disadvantage is the license fee. The remaining cons are insignificant for the average user, and for such users proprietary software is the best choice, provided, of course, that money is no object.

Proprietary drivers

Device drivers can also be proprietary. Here "proprietary" means they come from a vendor with closed source code, and making any changes to them is impossible. The topic of proprietary drivers comes up most often among users of open-source software, especially free Linux-like systems, which frequently have to use proprietary drivers. Ubuntu, for example, uses both free and proprietary software.

On Ubuntu, such drivers are more stable than the free ones, which is not surprising, since nobody has tinkered with their source code. On the other hand, unlike with free drivers, users of "closed" software have to wait a long time for fresh updates. So you have to choose: novelty or stability.

Proprietary equipment

With equipment, "proprietary" has no exact equivalent; the word usually denotes some distinctive feature of the manufacturer, for example a unique connector. Besides their special shape, such connectors often support particular protocols, for example for data transmission or power management.

Apple is the usual offender when it comes to proprietary connectors: everything is patented and "closed", and if you connect, say, an unlicensed charger, your device may even burn out. Such equipment is usually made with the sole intention of extracting more money from users; proprietary connectors offer no obvious advantage, affecting neither data transfer speed nor charging speed.

Conclusion

Now we have a good idea of what the fancy word "proprietary" means: a product that is licensed and has a specific owner. Such a product cannot be used without payment. In addition, such products (software and drivers) are closed source, and there is no way to make changes to them.

That concludes our short primer. You can now choose between proprietary and free software with full knowledge of the pros and cons; the meaning of the word "proprietary" is no longer a mystery.

You've probably heard the phrase "you need to use a VPN to protect your privacy." Now you're thinking, "Okay, but how does a VPN work?"

There are many articles on the internet that recommend a particular protocol, but few take the time to explain the underlying VPN technologies. In this article, I will explain which VPN protocols exist, how they differ, and what you should look for.

What is VPN?

Before we look at specific VPN protocols, let's take a quick look at what a VPN is.

Basically, a VPN allows you to access the public internet using a private connection. When you click a link on the internet, your request goes to the correct server, usually returning the correct content. Your data essentially flows unhindered from A to B, and a website or service can see your IP address among other identifying data.

When you use a VPN, all your requests are first routed through a private server owned by the VPN provider. Your request goes from A to C through B. You can access all the data previously available to you (and in some cases, more). But the website or service only has the details of the VPN provider: their IP address, etc.

There are many uses for a VPN, including protecting your data and identity, avoiding repressive censorship, and encrypting your messages.

What are VPN protocols?

The VPN protocol defines exactly how your data is routed between your computer and the VPN server. Protocols have different specifications, offering users benefits in a wide variety of circumstances. For example, some prioritize speed while others focus on privacy and security.

Let's take a look at the most common VPN protocols.


1. OpenVPN

OpenVPN is an open-source VPN protocol, which means users can inspect the source code for vulnerabilities or reuse it in other projects. It has become one of the most important and most secure VPN protocols: it lets users protect their data with essentially unbreakable AES-256 encryption (among other ciphers), 2048-bit RSA authentication, and 160-bit SHA-1 hashing.

Besides providing strong encryption, OpenVPN is also available for almost every platform: Windows, macOS, Linux, Android, iOS, routers, and more. Even Windows Phone and Blackberry can use it!
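For a sense of what this looks like in practice, here is a minimal client configuration of the kind a provider might ship. The server address, port, and certificate file names are placeholders; the cipher and auth lines correspond to the AES-256 and SHA-1 settings mentioned above.

```
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
cipher AES-256-CBC
auth SHA1
verb 3
```

In practice you rarely write this by hand: most providers supply ready-made .ovpn files that you import into the OpenVPN client.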

OpenVPN has been criticized in the past for low speeds. However, recent implementations have sped it up somewhat, and its focus on security and privacy deserves attention.


2. L2TP / IPSec

Layer 2 Tunneling Protocol is a very popular VPN protocol. L2TP is the successor to the deprecated PPTP (developed by Microsoft; see PPTP below for details) and L2F (developed by Cisco). However, L2TP does not itself provide any encryption or confidentiality.

Accordingly, services using L2TP often pair it with the IPSec security suite. Once implemented, L2TP/IPSec becomes one of the most secure VPN connections: it uses 256-bit AES encryption and has no known vulnerabilities (although IPSec was allegedly compromised by the NSA).

However, while L2TP/IPSec has no known vulnerabilities, it does have some minor flaws. For example, the protocol uses UDP on port 500 by default, which makes the traffic easier to identify and block.
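To see why a fixed port is a liability, consider this toy Python sketch of the check a filtering middlebox might run. The mapping lists the well-known UDP ports involved in L2TP/IPSec (500 for IKE key exchange, 1701 for L2TP itself, 4500 for NAT traversal); everything else passes unclassified. Real traffic classifiers are far more sophisticated, but fixed ports make this naive check surprisingly effective.

```python
# Toy sketch: a fixed, well-known port set makes traffic trivial to
# classify (and therefore to block) by destination port alone.
L2TP_IPSEC_PORTS = {
    500:  "IKE key exchange (UDP)",
    1701: "L2TP (UDP)",
    4500: "IPsec NAT traversal (UDP)",
}

def classify(dst_port: int) -> str:
    """Label a packet's destination port if it belongs to L2TP/IPsec."""
    return L2TP_IPSEC_PORTS.get(dst_port, "unclassified")

print(classify(500))   # -> IKE key exchange (UDP)
print(classify(443))   # -> unclassified
```

A protocol that can run over arbitrary ports (OpenVPN, for instance, is often run over TCP 443 to blend in with HTTPS) is much harder to single out this way.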


3. SSTP

Secure Socket Tunneling Protocol is another popular VPN protocol. SSTP has one notable advantage: it is fully integrated into every Microsoft operating system since Windows Vista Service Pack 1 (SP1). This means you can use SSTP with Winlogon or, for increased security, with a smart card. In addition, many VPN providers publish Windows-specific SSTP setup instructions on their websites.

SSTP uses 2048-bit SSL / TLS certificates for authentication and 256-bit SSL keys for encryption. Overall, SSTP is reasonably secure.

SSTP is essentially a proprietary Microsoft protocol, which means no one can fully audit the underlying code. However, most still consider SSTP secure.

Finally, SSTP has built-in support on Windows and is also available on Linux and BSD, while Android, macOS, and iOS are supported through third-party clients.


4. IKEv2

Internet Key Exchange version 2 is another VPN protocol developed by Microsoft and Cisco. IKEv2 by itself is just a tunneling protocol providing a secure key-exchange session, so (like its predecessor) it is usually paired with IPSec for encryption and authentication.

Although IKEv2 is not as popular as other VPN protocols, it features in many mobile VPN solutions, because it can reconnect after momentary losses of internet connection and across network switches (for example, from Wi-Fi to mobile data).

IKEv2 is a proprietary protocol with native support for Windows, iOS and Blackberry devices. Open source versions are available for Linux, and Android support is available through third-party apps.

Unfortunately, while IKEv2 is great for mobile connections, there is strong evidence that the American NSA has actively exploited its flaws to undermine IPSec traffic.

5. PPTP

Point-to-Point Tunneling Protocol is one of the oldest VPN protocols. It is still used in places, but most services have long since moved to faster and more secure protocols.

PPTP was introduced back in 1995. It was actually integrated with Windows 95, designed to work with dial-up connections. This was extremely helpful at the time.

But VPN technology has moved on, and PPTP is no longer secure: governments and criminals broke PPTP's encryption long ago, making any data sent over the protocol insecure.

However, it is not quite dead yet. Some people find that PPTP gives the best connection speeds, precisely because it lacks the security features of modern protocols, so it is still used by those who simply want to watch Netflix from a different location.

Summarizing VPN Protocols

We've covered five main VPN protocols. Let's quickly summarize their pros and cons.

OpenVPN: is open source, offers the strongest encryption, suitable for all activities, has slightly slower connection times.

L2TP / IPSec: widely used protocol, good speeds, but easily blocked due to dependence on one port.

SSTP: good security, difficult to block and detect.

IKEv2: fast, convenient for mobile devices, with multiple open source implementations (potentially undermined by the NSA).

PPTP: fast, widely supported but full of security holes, used only for streaming and basic web browsing.

For complete security and peace of mind, choose a VPN provider that offers a choice of protocol. Better still, use a paid VPN solution such as ExpressVPN rather than a free service: when you pay for a VPN, you are buying a service; when you use a free VPN, you have no idea what they might do with your data.

What VPN protocol do you usually use? Does your VPN offer a choice? Let us know in the comments!