Provided by: lxc_0.7.5-3ubuntu52_amd64


NAME
       lxc.conf - linux container configuration file


DESCRIPTION
       Linux containers (lxc) are always created before being used. This creation defines the
       set of system resources to be virtualized / isolated when a process is using the
       container. By default, the pids, sysv ipc and mount points are virtualized and isolated.
       The other system resources are shared across containers until they are explicitly
       defined in the configuration file. For example, if there is no network configuration,
       the network will be shared between the creator of the container and the container
       itself, but if the network is specified, a new network stack is created for the
       container and the container can no longer use the network of its ancestor.

       The configuration file defines the different system  resources  to  be  assigned  for  the
       container.  At  present,  the utsname, the network, the mount points, the root file system
       and the control groups are supported.

       Each option in the configuration file has the form key = value and fits on one line.
       A line starting with the '#' character is a comment.

       The architecture option allows setting the architecture for the container: for
       example, a 32-bit architecture for a container running 32-bit binaries on a 64-bit
       host. This fixes container scripts that rely on the architecture to do some work,
       such as downloading packages.

             lxc.arch
               Specify the architecture for the container.

               Valid options are x86, i686, x86_64 and amd64.
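
       For example, to run 32-bit binaries in a container on a 64-bit host:

             lxc.arch = x86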

       The utsname section defines the hostname to be set for the container. That means the
       container can set its own hostname without changing the one of the host system,
       making the hostname private to the container.

             lxc.utsname
               specify the hostname for the container

       The  network  section defines how the network is virtualized in the container. The network
       virtualization acts at layer two. In order to use the network  virtualization,  parameters
       must  be  specified  to  define  the  network interfaces of the container. Several virtual
       interfaces can be assigned and used in a  container  even  if  the  system  has  only  one
       physical network interface.

             lxc.network.type
               specify what kind of network virtualization to be used for the container.
               Each time a lxc.network.type field is found, a new round of network
               configuration begins. In this way, several network virtualization types can
               be specified for the same container, as well as several network interfaces
               assigned to one container. The different virtualization types can be:

              empty: will create only the loopback interface.

               veth: a peer network device is created with one side assigned to the
               container and the other side attached to a bridge specified by
               lxc.network.link. If the bridge is not specified, then the veth pair device
               will be created but not attached to any bridge. Otherwise, the bridge has to
               be set up beforehand on the system; lxc won't handle any configuration
               outside of the container. By default lxc chooses a name for the network
               device belonging to the outside of the container; this name is handled by
               lxc, but if you wish to handle this name yourself, you can tell lxc to set a
               specific name with the lxc.network.veth.pair option.
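
               As an illustrative sketch, a minimal veth setup attached to a pre-existing
               host bridge (br0 is an assumed name here) looks like:

             lxc.network.type = veth
             lxc.network.flags = up
             lxc.network.link = br0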

               vlan: a vlan interface is linked with the interface specified by
               lxc.network.link and assigned to the container. The vlan identifier is
               specified with the lxc.network.vlan.id option.
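
               For example (interface name and identifier are illustrative), attaching the
               container to vlan 100 on the host interface eth0:

             lxc.network.type = vlan
             lxc.network.flags = up
             lxc.network.link = eth0
             lxc.network.vlan.id = 100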

               macvlan: a macvlan interface is linked with the interface specified by
               lxc.network.link and assigned to the container. lxc.network.macvlan.mode
               specifies the mode the macvlan will use to communicate between different
               macvlan interfaces on the same upper device. The accepted modes are:

               private: the device never communicates with any other device on the same
               upper_dev (default).

               vepa: the new Virtual Ethernet Port Aggregator (VEPA) mode. It assumes that
               the adjacent bridge returns all frames where both source and destination are
               local to the macvlan port, i.e. the bridge is set up as a reflective relay.
               Broadcast frames coming in from the upper_dev get flooded to all macvlan
               interfaces in VEPA mode; local frames are not delivered locally.

               bridge: provides the behavior of a simple bridge between different macvlan
               interfaces on the same port. Frames from one interface to another get
               delivered directly and are not sent out externally. Broadcast frames get
               flooded to all other bridge ports and to the external interface, but when
               they come back from a reflective relay, we don't deliver them again. Since we
               know all the MAC addresses, the macvlan bridge mode does not require learning
               or STP like the bridge module does.
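
               For example (interface name is illustrative), a macvlan in bridge mode on
               top of the host interface eth0:

             lxc.network.type = macvlan
             lxc.network.flags = up
             lxc.network.macvlan.mode = bridge
             lxc.network.link = eth0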

               phys: an already existing interface specified by lxc.network.link is
               assigned to the container.

             lxc.network.flags
               specify an action to do for the network.

               up: activates the interface.

             lxc.network.link
               specify the interface to be used for real network traffic.

             lxc.network.name
               the interface name is dynamically allocated, but if another name is needed
               because the configuration files used by the container expect a generic name,
               eg. eth0, this option will rename the interface in the container.

             lxc.network.hwaddr
               the interface mac address is dynamically allocated by default to the virtual
               interface, but in some cases it is necessary to set it, for example to
               resolve a mac address conflict or to always have the same link-local ipv6
               address.

             lxc.network.ipv4
               specify the ipv4 address to assign to the virtualized interface. Several
               lines specify several ipv4 addresses. The address is in the format
               x.y.z.t/m. The broadcast address should be specified on the same line, right
               after the ipv4 address.
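
               For example (addresses are illustrative), an address and its broadcast
               address on one line:

             lxc.network.ipv4 = 192.168.1.123/24 192.168.1.255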

             lxc.network.ipv6
               specify the ipv6 address to assign to the virtualized interface. Several
               lines specify several ipv6 addresses. The address is in the format x::y/m.

             lxc.network.script.up
               add a configuration option to specify a script to be executed after creating
               and configuring the network used from the host side. The following arguments
               are passed to the script: the container name and the config section name
               (net). Additional arguments depend on the config section employing a script
               hook; the network system passes the execution context (up) and the network
               type (empty/veth/macvlan/phys). Depending on the network type
               (veth/macvlan/phys), the host-side device name is passed as a final
               argument.
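
               As an illustration, assuming a hypothetical script /etc/lxc/net-up.sh exists
               on the host:

             lxc.network.script.up = /etc/lxc/net-up.sh

               For a veth interface named c1, the script would then be invoked with
               arguments along the lines of: c1 net up veth <host-side-device>.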

       For stricter isolation the container can have its own private instance of the pseudo
       tty.

             lxc.pts
               If set, the container will have a new pseudo tty instance, making it private
               to the container. The value specifies the maximum number of pseudo ttys
               allowed for a pts instance (this limitation is not yet implemented).
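
               For example:

             lxc.pts = 1024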

       If the container is configured with a root filesystem and the inittab file is set up
       to use the console, you may want to specify where the output of this console goes.

             lxc.console
               Specify a path to a file where the console output will be written. The
               keyword 'none' will simply disable the console. This can be dangerous: if
               the rootfs contains a console device file that the application can write to,
               the messages will end up on the host.
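
               For example (the path is illustrative), to capture the console output in a
               file on the host:

             lxc.console = /var/log/mycontainer.console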

       If the container is configured with a root filesystem and the inittab file is set up
       to launch a getty on the ttys, this option specifies the number of ttys to be made
       available to the container. The number of gettys in the inittab file of the
       container should not be greater than the number of ttys specified in this
       configuration file, otherwise the excess getty sessions will die and respawn
       indefinitely, giving annoying messages on the console.

             lxc.tty
               Specify the number of ttys to make available to the container.
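
               For example, to match an inittab that spawns four gettys:

             lxc.tty = 4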

       LXC consoles are provided through Unix98 PTYs created on the host and bind-mounted
       over the expected devices in the container. By default, they are bind-mounted over
       /dev/console and /dev/ttyN. This can prevent package upgrades in the guest.
       Therefore you can specify a directory location (under /dev) under which LXC will
       create the files and bind-mount over them. These will then be symbolically linked to
       /dev/console and /dev/ttyN. A package upgrade can then succeed as it is able to
       remove and replace the symbolic links.

             lxc.devttydir
               Specify a directory under /dev under which to create the container console
               devices.
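
               For example, the following makes LXC create the console devices under
               /dev/lxc/ in the container and symlink /dev/console and /dev/ttyN to them:

             lxc.devttydir = lxc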

       The mount points section specifies the different places to be mounted. These mount
       points will be private to the container and won't be visible by the processes
       running outside of the container. This is useful to mount /etc, /var or /home, for
       example.

             lxc.mount
               specify a file location in the fstab format, containing the mount
               information. If the rootfs is an image file or a block device and the fstab
               is used to mount a point somewhere in this rootfs, the path of the rootfs
               mount point should be prefixed with the /usr/lib/lxc/root default path or
               the value of lxc.rootfs.mount if specified.

             lxc.mount.entry
               specify a mount point corresponding to a line in the fstab format.
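
               For example (paths are illustrative), a per-container fstab file plus an
               inline read-only bind mount:

             lxc.mount = /etc/fstab.mycontainer
             lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0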

       The root file system of the container can be different than that of the host system.

             lxc.rootfs
               specify the root file system for the container. It can be an image file, a
               directory or a block device. If not specified, the container shares its root
               file system with the host.

             lxc.rootfs.mount
               where to recursively bind lxc.rootfs before pivoting. This is to ensure
               success of the pivot_root(8) syscall. Any directory suffices; the default
               should generally be accepted.

             lxc.pivotdir
               where to pivot the original root file system under lxc.rootfs, specified
               relative to that. The default is mnt. It is created if necessary, and also
               removed after unmounting everything from it during container setup.
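
               For example (the path is illustrative):

             lxc.rootfs = /var/lib/lxc/mycontainer/rootfs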

       The control group section contains the configuration for the different subsystems.
       lxc does not check the correctness of the subsystem name. This has the disadvantage
       of not detecting configuration errors until the container is started, but has the
       advantage of permitting any future subsystem.

            lxc.cgroup.[subsystem name]
              specify the control group value to be set.  The subsystem name is the literal  name
              of the control group subsystem.  The permitted names and the syntax of their values
              is not dictated by LXC, instead it depends on the  features  of  the  Linux  kernel
              running at the time the container is started, eg. lxc.cgroup.cpuset.cpus
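
               For example, assuming the memory subsystem is available on the running
               kernel, a memory cap can be set the same way:

             lxc.cgroup.memory.limit_in_bytes = 268435456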

       Capabilities can be dropped in the container if it is run as root.

             lxc.cap.drop
               Specify the capability to be dropped in the container. A single line
               defining several capabilities separated by spaces is allowed. The format is
               the lowercase of the capability definition without the "CAP_" prefix, eg.
               CAP_SYS_MODULE should be specified as sys_module. See capabilities(7).
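
               For example, to drop module loading and raw socket access in one line:

             lxc.cap.drop = sys_module net_raw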


EXAMPLES
       In addition to the few examples given below, you will find some other examples of
       configuration files in /usr/share/doc/lxc/examples.

       This configuration sets up a container to use a veth pair device with one side
       plugged into a bridge br0 (which has to have been configured beforehand on the
       system by the administrator). The virtual network device visible in the container is
       renamed to eth0.

             lxc.utsname = myhostname
             lxc.network.type = veth
             lxc.network.flags = up
             lxc.network.link = br0
             lxc.network.name = eth0
             lxc.network.hwaddr = 4a:49:43:49:79:bf
             lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597

       This configuration sets up several control groups for the application: cpuset.cpus
       restricts usage to the defined cpus, cpu.shares prioritizes the control group, and
       devices.allow makes the specified devices usable.

            lxc.cgroup.cpuset.cpus = 0,1
            lxc.cgroup.cpu.shares = 1234
            lxc.cgroup.devices.deny = a
            lxc.cgroup.devices.allow = c 1:3 rw
            lxc.cgroup.devices.allow = b 8:0 rw

       This example shows a complex configuration: building a network stack with several
       interface types, using the control groups, setting a new hostname, mounting some
       locations, and changing the root file system.

             lxc.utsname = complex
             lxc.network.type = veth
             lxc.network.flags = up
             lxc.network.link = br0
             lxc.network.hwaddr = 4a:49:43:49:79:bf
             lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
             lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588
             lxc.network.type = macvlan
             lxc.network.flags = up
             lxc.network.link = eth0
             lxc.network.hwaddr = 4a:49:43:49:79:bd
             lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
             lxc.network.type = phys
             lxc.network.flags = up
             lxc.network.link = dummy0
             lxc.network.hwaddr = 4a:49:43:49:79:ff
             lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297
            lxc.cgroup.cpuset.cpus = 0,1
            lxc.cgroup.cpu.shares = 1234
            lxc.cgroup.devices.deny = a
            lxc.cgroup.devices.allow = c 1:3 rw
            lxc.cgroup.devices.allow = b 8:0 rw
            lxc.mount = /etc/fstab.complex
            lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
            lxc.rootfs = /mnt/rootfs.complex
            lxc.cap.drop = sys_module mknod setuid net_raw
            lxc.cap.drop = mac_override


SEE ALSO
       chroot(1), pivot_root(8), fstab(5)


       lxc(1),  lxc-create(1),  lxc-destroy(1),  lxc-start(1),  lxc-stop(1), lxc-execute(1), lxc-
       kill(1), lxc-console(1), lxc-monitor(1), lxc-wait(1), lxc-cgroup(1), lxc-ls(1), lxc-ps(1),
       lxc-info(1), lxc-freeze(1), lxc-unfreeze(1), lxc.conf(5)


AUTHOR
       Daniel Lezcano <>

                                          16 April 2012                               LXC.CONF(5)