
Docker vs Vagrant

Our team needs a local environment identical, or close, to the real product. We have experience with Vagrant (VirtualBox + Vagrant + Homestead), but Docker has also been recommended, and I know it only from articles.

Does it make sense to use it instead of Vagrant?

The technology stack is: Node.js, PHP, Redis, MySQL; we may add task queue processing in the near future.

I would be grateful for manuals or articles.


Answer 1, authority 100%

What are Vagrant and Docker?

In general, Docker and Vagrant are both thought of simply as virtualization tools with different degrees of virtualization: Docker as virtualization at the Linux kernel level (it is no longer Linux-only, but I won't touch on how things stand on Mac and Windows), and Vagrant as full virtualization (it is hard to characterize it precisely, but where in the first case everything happens inside the Linux kernel, here the machine is virtualized completely: it is given virtual memory, a virtual processor, and other virtual devices, and the VM believes it is talking to hardware that does not actually exist).
In fact, things are rather different: Docker really does provide the virtualization described above, but Vagrant is merely a manager of virtualization tools, and the passage about "full virtualization" actually refers to VirtualBox, which Vagrant manages. Vagrant has the notion of a provider (a virtualization provider) and can drive both VirtualBox and Docker.

Why you need it / what it allows you to do

You do not specify in your question which goals you are pursuing, so I have to describe which problems each tool is meant to solve.

Virtualization as such brings many pleasant bonuses, including:

  • simple and lightweight limiting of process resources
  • isolation of resources
  • the ability to ship not applications as such but a fully ready-made solution that does not need to be installed and configured, only run
  • indirectly, repeatability of processes

This is very good from an infrastructure standpoint, because it allows you to move from a "broke → repaired" model to a "broke → redeployed anew" model, which is much easier to automate and saves a heap of resources.

From the point of view of development, two key pluses appear that can change the development paradigm:

  • Building the application inside a virtual machine turns the situation from "it runs on my machine" into "it runs in this virtual machine." The engineer who has to deploy the application, receiving it as a virtual machine image, knows that all they need to ensure is the launch of the image itself, and spends ninety percent less time figuring out why it will not start.
  • Using virtual machines lets the developer fully simulate the infrastructure in which the project will be deployed, which again reduces the costs of the familiar "everything works on my machine" problem, and lets them keep several versions of software for different projects (if project A uses PHP 5.3 and project B uses PHP 7.0, instead of juggling versions on the work machine the developer can keep them in separate isolated machines).

In your case, of course, it is the second point that matters.

Both Vagrant + VirtualBox and Docker allow you to implement it, so there is no fundamental difference here; it makes sense simply to describe the concepts of both tools.

VirtualBox by itself is just a virtual machine manager that lets you run machines from images. It is simple: a system disk image is taken; a processor, RAM, and a hard disk are emulated; the virtual machine looks for a boot sector on the virtual disk; and the machine starts up as if it were real.

Vagrant is a virtualization environment manager and can actually drive not only VirtualBox but also Docker. In this context, Vagrant lets you automate the creation of a VirtualBox virtual machine (or machines) from a clean operating system image and apply so-called provisioning: initializing the machine with a shell script or a configuration manager.
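
As a sketch of what such provisioning looks like in a Vagrantfile (the box name, resources, and package list are illustrative assumptions, not taken from the question):

```ruby
# Vagrantfile: a minimal sketch of a VirtualBox machine with shell provisioning.
# Box name, resources, and packages below are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"                 # clean OS image to start from
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                               # a full VM gets its own RAM/CPU
    vb.cpus = 2
  end

  # Provisioning: initialize the machine with a plain shell script
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx php-fpm redis-server mysql-server
  SHELL
end
```

The same provision block could instead point at a Chef, Puppet, or Ansible configuration.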

Docker by itself provides exactly the same virtualization functionality but is very different conceptually. To achieve its goals, it uses the following mechanisms:

  • A copy-on-write layer system for the file system. The final file system is assembled from several layers, so images can share their lower layers while each virtual machine uses its own (topmost) layer; machines can thus share a common base that does not need to be copied to start a machine.
  • The namespace system in the Linux kernel (again, it is hard for me to say how this is organized on Mac / Windows).

This lets you treat machines as disposable and recreate them hundreds of times per second (a very fast process compared to bringing up a full-fledged virtual machine). Out of this, the Docker philosophy formed, with its process-per-container rule (one process per container): instead of distributing virtual machines, the Docker concept is to distribute applications, each wrapped in its own isolated virtual machine image.

From the inside, virtualization looks almost the same as in a regular virtual machine: a root process starts, which brings up all the others. In Vagrant-VirtualBox images that process is the usual init; in Docker images it is the application itself. It turns out that a Vagrant-VirtualBox virtual machine launches an OS process manager that watches over everything happening, while a Docker image runs nothing but the end application.

Therefore, the key difference in a Docker-based infrastructure is that Vagrant + VirtualBox is usually used to bring up one machine with everything inside, while with Docker a container is brought up per service (Redis, database, queue service, etc.).
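
A container-per-service setup for the stack from the question might be sketched in a docker-compose file like this (service names, image tags, and the password are illustrative assumptions):

```yaml
# docker-compose.yml: one container per service (Node.js, PHP, Redis, MySQL).
# All names and tags here are illustrative, not a recommended configuration.
services:
  app:
    image: node:18
    working_dir: /app
    volumes:
      - ./:/app                  # project mounted from the host
    command: node server.js
    depends_on: [redis, db]
  php:
    image: php:8.2-fpm
    volumes:
      - ./:/app
  redis:
    image: redis:7
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # configuration via environment variables
```

The whole composition then starts and stops with docker-compose up -d / docker-compose down.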

Thus, VirtualBox + Vagrant and Docker differ mainly in concept, which does not imply a functional difference. It should be noted, however, that the community around each tool has developed in the direction the tool itself is used: with Docker you get access to publicly available isolated images with one process per image and configuration through environment variables; with Vagrant you get access to machines configured through a configuration manager (Chef, Puppet, SaltStack) or simply a shell script, ready to host a large number of services per machine.

Is there any difference

In the comments on the question I took the position that there is no fundamental difference between them, and I intend to keep taking it.

The point is that Vagrant-on-top-of-VirtualBox and Docker actually provide the same functionality:

  • a virtual machine with one root process (strictly speaking, that is how all machines work)
  • distributable images of these virtual machines
  • the ability to forward virtual machine ports to the host
  • mounting host directories inside the virtual machine
  • the ability to organize an internal network of virtual machines
  • starting / stopping a composition of virtual machines
  • the ability to save, export, and import virtual machine snapshots
  • limiting a virtual machine's resources
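
For illustration, most of the list above maps onto standard docker subcommands and docker run flags (the image, container, and path names here are made up):

```shell
docker run -p 8080:80 nginx                  # forward a container port to the host
docker run -v "$PWD":/app node:18            # mount a host directory inside
docker run --memory 512m --cpus 1 redis:7    # limit the machine's resources
docker network create internal               # internal network between machines
docker commit mycontainer myimage:snap       # save a container's state as an image
docker save -o myimage.tar myimage:snap      # export that snapshot to a file
```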

So there really is no functional difference that would dictate which one to build your infrastructure on: you can smoothly move Vagrant to the "one process per machine" philosophy, or shove everything and the kitchen sink into a single Docker container. However, Docker has a number of advantages that do not affect the virtualization functionality itself but simply make it more convenient and economical to use (and which shaped its philosophy):

  • no overhead of full virtualization
  • the COW layer system, which saves space when running multiple machines with a common ancestor
  • almost instant startup, because there is no need to initialize the system (boot the kernel, mount disks, and all that incredible parade of operations that happens when a machine is turned on); the container only needs to start the binary specified as its entry point.

Because of this, the container’s life cycle (on the dev machine) is generally much shorter than that of the virtual machine (and, as a bonus, this allows you to zero out the database in almost one command).

Dealing with Docker can be a little more complex, but the overhead of starting and stopping a composition of services is lower, and exporting the final application as a virtual machine image can be less time-consuming than with Vagrant + VirtualBox. Nevertheless, I would not write off Vagrant myself: until the world switches to container technologies entirely, it recreates the target environment more closely (one machine with everything installed inside), and, besides, no one forbids using it in conjunction with Docker.


Answer 2, authority 16%

Docker and Vagrant are slightly different things.

Docker is, in essence, a kind of container into which you can shove an operating system, your software, and a database, and run it. Containers have layers: the OS is the bottom layer, software is a layer above it, and your app and data form the top layer. Containers are convenient for transferring an application and its supporting environment to a server or cloud. Docker does not create a virtual system but only emulates one by forwarding system calls, and it works stably only on Linux. Recently, a version for macOS has appeared. The container build is usually described in a Dockerfile.
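
A minimal Dockerfile sketch of those layers might look like this (the base image and file names are illustrative assumptions):

```dockerfile
# Bottom layer: OS userland + language runtime
FROM php:8.2-cli
# Next layer: your application code
COPY . /app
WORKDIR /app
# Configuration via an environment variable
ENV APP_ENV=production
# The single process the container runs
CMD ["php", "app.php"]
```

Each instruction produces one layer, and unchanged lower layers are shared between images.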

Vagrant, on the other hand, is based on virtualization systems (VirtualBox, libvirt, xen). From the very beginning, it creates a full-fledged virtual machine, runs the provision script, which sets up the entire environment and configures it. Vagrant works on all operating systems by its very nature.

Typically Vagrant is used for local development. In principle, it is intended for this. You started the virtual machine, played with it, and demolished it. Or rebuilt again. There is room for experimentation.

Docker, on the other hand, works as a bundler of software in a kind of box, which is convenient to move later somewhere and run somewhere.

It’s good to try both systems anyway.


Answer 3, authority 13%

Vagrant – a ready-made virtual machine

Docker – a set of isolated containers

Docker can be called isolation from the environment (settings) of the operating system. The point of Docker is that you can run the program almost anywhere without configuring it for another OS (without installing libraries, without writing paths, ports, without creating configuration files if possible, etc.).

We use Docker together with docker-compose on our team; deploying the entire project comes down to the command docker-compose up -d and describing a few machine-specific environment variables in a file (environment variables are convenient).

The changes do not affect your operating system, you can work on several projects at the same time, it does not consume many computer resources, and there is no need to worry about some library being missing.

It is convenient to test how your project works on a new version of some software: describe a container with PHP 7.1 in the file instead of PHP 7, switch the ports, and check how your project behaves on the new version. Pulled in a new system library, rebuilt in a few minutes, it did not start: uncomment the line with the old library, rebuild, and everything is fine again.
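
That version-switch experiment amounts to a couple of lines in the compose file (the service name, tags, and ports are illustrative):

```yaml
# docker-compose.yml fragment: testing the project against a newer PHP
services:
  php:
    # image: php:7.0-fpm       # old version, kept commented for easy rollback
    image: php:7.1-fpm         # new version to try
    ports:
      - "9001:9000"            # switched host port so both variants can coexist
```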

There are disadvantages in the form of network configuration, logging, management, but with each release, the developers try to fix all existing problems.


Answer 4, authority 8%

Worked with Vagrant and Docker.

Vagrant is very convenient for creating a virtual machine in which data lives a long time between vagrant up and vagrant halt, which is exactly what you need when you work on a project. Using containers in this case is not entirely appropriate.

Vagrant allows you to install a box with the operating system that will be in production. Then the tools that will run alongside it are installed. An ansible / puppet / chef / salt / bash provision script is written, which brings everything up on vagrant up. The resulting box can be safely packed and sent to storage. After that, a new developer downloads the box, brings it up, and gets a ready-made env for development. The data life cycle is long and everything is preserved between stops, unlike with Docker (let's not count volumes; they are not needed there).
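
The workflow described above can be sketched as a handful of standard Vagrant commands (box and file names here are made up):

```shell
vagrant up                                # create the VM and run provisioning
vagrant package --output devenv.box       # pack the configured machine into a box
# ...upload devenv.box to team storage; a new developer then runs:
vagrant box add myteam/devenv devenv.box  # register the ready-made box locally
vagrant up                                # bring up a ready development environment
```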

Docker is very useful if you need to test the result on a specific set of data. Launched, checked, uninstalled.

Of course, Docker can be used for delivery to production, and this is very convenient, BUT only if the project is not complex and consists of at most 1-2 servers, not counting the DB. Docker-compose is still too crude to run in production, because it cannot manage communication between applications well enough.

If you need a tool to get closer to a production server and a headache pill for new developers, feel free to use Vagrant.


Answer 5, authority 8%

Both Docker (creating containers) and vagrant (creating a box that can be shared) are suitable for creating an identical environment.
That being said, it's worth remembering that Docker is a bit of a different story: each application will need to be packaged in a separate container, which forces extra busywork on you.

I recommend the Vagrant + Ansible bundle. The difference will be in the initialization command (docker-compose up -d versus vagrant up), but changing the environment will be much more comfortable, flexible, and fast, and you will also be free of the unnecessary layers and restrictions that Docker imposes.

Update # 1

The layers are unnecessary for you, not for Docker.
I mean everything related to Net, FS, etc.
Take Celery, a distributed job queue, for example.
Installation is pretty simple: install Python, install Celery, open the config and configure it as you need, and additionally install a web frontend.

It’s very simple, but not when using Docker :

  1. Create containers for the Celery server, the Celery workers, and the web frontend.

  2. Configure ports for communication.

  3. Share the config to all containers.

Additionally, suppose your workers must run tasks written in PHP. This means the containers must additionally contain PHP.

It may sound like a matter of 5 minutes, but in reality such tasks can easily take half a day.

PS: I myself initially used the Vagrant + Puppet bundle, but I abandoned it because there were constant problems with Puppet (during installation the network could drop for a second, or an application could return an error code for some other reason, and this caused the entire installation process to fail).

This prompted me to switch to Docker (in separate parts). In fact, once everything is set up, everything is fine, but here are the things that annoy me most about it (for this kind of task):

  1. DB in a container – there is no choice, but there are many problems. Pitfalls can appear throughout the development cycle.

  2. Docker doesn’t make it easy, it just makes the development process worse.

Update # 2

Docker is an excellent thing, but it is for something else. You can use it, but be prepared for shamanism. For example, for MongoDB or Redis you need to disable THP, and inside a Docker container /sys is mounted read-only, so you will have to tweak the host that runs Docker anyway. Assembling a bunch of containers is also not always convenient. You have a simple task: create a single environment for a group of programmers. In the simplest case you could just share a virtual machine image, but that is not very practical. A more involved option is to write a script that installs the necessary dependencies; Vagrant plus the script itself is enough for that. A more elegant option is to use a configuration manager (Puppet and the like). I have not worked with Puppet; I chose Ansible (which also immediately covers deployment of the application itself).

Also, you should not leave aside the processes inside the team: very often, after a developer pulls changes from the repository, they need to perform a set of actions (migrations, cache clearing, etc.). This chain of actions can be placed on Ansible's shoulders.


Answer 6, authority 7%

Docker, IMHO, makes sense only when you need a dynamically expandable infrastructure, i.e. in the cloud. Also if the company has many different environments on its PROD servers and they already use Docker there.
Otherwise, it is just a waste of time.

Here is an example.
We need to change something in the infrastructure of one server.
If we were not using Docker, we would just change a couple of lines in Ansible (or similar) and run the playbook with the tag of the desired software. 3-5-10 seconds and everything is ready.

In the case of Docker, you need to rebuild the entire image (5-10-20 minutes).
If there are many edits, you can forget about productive work altogether.
Next…

Docker and macOS:

  1. When mounting a host folder inside the container, files get ownership 1000:staff. Yes, it is solvable, but on average it takes people (even those who know where to look) 2-3 days to find the answer.
  2. Synchronization speed of the native driver. Many solve this with NFS, or with special (!) software based on rsync / Unison.
  3. Anything that uses mmap on host-mounted volumes will crash with critical errors. MongoDB, for example.

Among other things, all the images have to be stored somewhere in order to distribute them.
And if you do not want to publish them openly, you will have to look for a service, or set up your own registry server, which will then need to be maintained.

If the question is only about a local environment – Docker is definitely not your option.

Vagrant is better in this regard, but it has its pitfalls too.
For example, NGINX sendfile does not work.
Boxes also have to be built in their entirety.

What to do?
For myself, I got a VPS; with Ansible I roll out the necessary updates in a few seconds.
Once a month I do a full update.
I configured one-way replication with dklab_realsync (the article explains why one-way is enough).
All package managers, Git, etc. run locally, and then replication happens.

Advantages:

  1. Primary replication takes no more than a minute.
  2. Easy to show work to colleagues / customers. The server is always available from outside.
  3. Everything is configured far more simply, and the entry threshold is minimal.

Answer 7, authority 2%

Docker: the network is full of manuals. From the orchestration point of view, containers are very convenient when you have a server cluster; there are a mass of solutions.

We have already been using Rancher.com (open source) for 1.5 years.

Docker is easier and faster to learn, and to operate, IMHO.
