IT Cloud - Eugeny Shtoltc

Author: Eugeny Shtoltc
Pages: 385
Year: 2021

In this book, the Chief Architect of the Cloud Solutions Architecture Department at Sberbank shares his knowledge and experience with the reader on building and migrating to a cloud ecosystem, and on creating and adapting applications for it. The author tries to guide the reader along this path while avoiding mistakes and pitfalls. To this end, practical applications are demonstrated and explained so that the reader can use them as instructions for educational and work purposes. The readership may include developers of various levels as well as ecosystem specialists who do not want their skills to lose relevance in an already changed world.

The book also contains additional information on the latest trends in cloud technology, giving the reader new insights and perspectives on the future development of this field. The author draws on his own experience with cloud systems, which gives the text a unique and particular value for the reader.

Based on the real-world examples described in the book, the reader will be able to master not only the theoretical foundations of cloud solutions but also learn to apply them in practice. The author offers readers not only instructions but also the opportunity to ask questions and communicate with experts in the field.

This book opens new horizons for developers and ecosystem specialists, allowing them to stay relevant and competitive in the field of cloud technology. The material presented in the book will help readers overcome difficulties and set about building modern cloud applications that fully meet the requirements and needs of today's market.


Prologue

More than 70 tools (76 in total) are covered in practice in the book:

* Google Cloud Platform, Amazon Web Services, Microsoft Azure;

* console utilities: cat, sed, NPM, node, exit, curl, kill, Dockerd, ps, sudo, grep, git, cd, mkdir, rm, rmdir, mongos, Python, df, eval, ip, mongo, netstat, oc, pgrep, ping, pip, pstree, systemctl, top, uname, VirtualBox, which, sleep, wget, tar, unzip, ls, virsh, egrep, cp, mv, chmod, ifconfig, kvm, minishift;

* standard tools: NGINX, MinIO, HAProxy, Docker, Consul, Vagrant, Ansible, kvm;

* DevOps tools: Jenkins, GitLab CI, BASH, PHP, Micro Kubernetes, kubectl, Velero, Helm, "http load testing";

* cloud tools: Traefik, Kubernetes, Envoy, Istio, OpenShift, OKD, Rancher;

* several programming languages: PHP, NodeJS, Python, Golang.

Containerization

Infrastructure development history

Tom Limoncelli (co-author of "The Practice of Cloud System Administration"), who worked for a long time at Google Inc, considers 2010 the year of transition from the era of the traditional Internet to the era of cloud computing:

* 1985-1994 – the time of mainframes (large computers) and intra-corporate data exchange, in which the load can be planned easily;

* 1995-2000 – the era of the rise of Internet companies;

* 2000-2003 – the formation of an ecosystem of distributed computing on mass-market hardware;

* 2003-2010 – the unification and virtualization of data-center infrastructure;

* 2010-2019 – the era of cloud computing.

The performance of a single machine grows more slowly than its cost: doubling performance raises the cost by significantly more than a factor of two, and each subsequent performance increment is more expensive than the last. Consequently, each new user became more expensive to serve.

Later, in the period 2000-2003, an ecosystem was able to form, providing a fundamentally different approach:

* the emergence of distributed computing;

* the emergence of inexpensive, mass-produced low-power hardware;

* maturation of OpenSource solutions, allowing software to be installed on many machines without per-processor licensing;

* maturation of telecommunication infrastructure;

* increasing reliability due to the distribution of points of failure;

* the ability to increase performance if needed in the future by adding new components.

The next stage was unification, which was most pronounced in 2003-2010:

* providing in the data center not rack space with power (colocation), but unified hardware purchased in bulk for the whole data center;

* saving on resources;

* virtualization of the network and computers.

Amazon set another milestone in 2010 and ushered in the era of cloud computing. This stage is characterized by building large-scale data centers with deliberately surplus capacity: buying wholesale lowers the cost of computing power, the company saves on its own needs, and the surplus is sold profitably at retail. The approach applies not only to computing power and infrastructure but also to software, which is packaged as services to reduce the cost of use and sold at retail to large companies and newcomers alike.

The need for uniformity of the environment

Usually, novice Linux developers prefer to work under Windows, so as not to learn an unfamiliar OS and make all the beginner's mistakes on it, because earlier things were far from as simple and as well debugged. Often, developers are forced to work under Windows because of corporate preferences: 1C, Directum and other systems run only on Windows, and the rest of the environment, most importantly the network infrastructure, is tailored to this operating system. Working under Windows leads to a large loss of working time for both developers and DevOps engineers on fixing both minor and major differences between the operating systems. These differences show up starting with the simplest tasks, for example, making a page in pure HTML. A misconfigured editor will insert a BOM and the line endings accepted on Windows ("\r\n" instead of "\n"). When the header, body and footer of the page are glued together, the BOM creates gaps between them; these are not visible in the editor, because they are formed by bytes of meta-information about the file type, which have no such meaning on Linux and are treated as ordinary printable content. The differing newlines also make Git diffs useless, because every line appears changed.
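As a quick illustration (a sketch assuming bash and GNU sed; the file name is hypothetical), the following reproduces the problem from the console and then fixes it:

```shell
# Simulate what a misconfigured Windows editor saves: a UTF-8 BOM (EF BB BF)
# at the start of the file plus CRLF ("\r\n") line endings.
printf '\xEF\xBB\xBF<h1>Header</h1>\r\n' > header.html

# The BOM and the CR bytes are invisible in most editors, but od shows them.
od -c header.html | head -n 2

# Strip the BOM from the first line and the trailing CR from every line
# (GNU sed assumed; -i edits the file in place).
sed -i '1s/^\xEF\xBB\xBF//; s/\r$//' header.html

# Now the file starts directly with the markup and uses plain "\n" endings,
# so concatenation and Git diffs behave the same on Linux and Windows.
od -c header.html | head -n 2
```

In practice the same normalization is usually delegated to Git itself via `core.autocrlf` or a `.gitattributes` file, so that the repository always stores "\n" regardless of the contributor's OS.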