the various versions of Bacula.
\section{New Features in 7.2.0}
\subsection{New Job Edit Codes \%E \%R}
In various places such as RunScripts, you have now access to \%E to get the
also possible to manage Snapshots from Bacula's \texttt{bconsole} tool through a
unique interface.
\subsubsection{Snapshot Backends}

The following Snapshot backends are supported with Bacula:

\begin{itemize}
\item BTRFS
\item ZFS
\item LVM\footnote{Some restrictions described in \vref{LVMBackend} apply to
 the LVM backend}
\end{itemize}

By default, Snapshots are mounted (or directly available) under the
\textbf{.snapshots} directory on the root filesystem. (On ZFS, the default
is \textbf{.zfs/snapshots}).

\smallskip{}

The Snapshot backend program is called \textbf{bsnapshot} and is available in
the \textbf{bacula-enterprise-snapshot} package. In order to use the Snapshot
Management feature, the package must be installed on the Client.

\smallskip{}
\label{bsnapshotconf}
The \textbf{bsnapshot} program can be configured using the
\texttt{/opt/bacula/etc/bsnapshot.conf} file. The following parameters can
be adjusted in the configuration file:

\begin{itemize}
\item \texttt{trace=<file>} Specify a trace file
\item \texttt{debug=<num>} Specify a debug level
\item \texttt{sudo=<yes/no>} Use sudo to run commands
\item \texttt{disabled=<yes/no>} Disable snapshot support
\item \texttt{retry=<num>} Configure the number of retries for some operations
\item \texttt{snapshot\_dir=<dirname>} Use a custom name for the Snapshot directory (\textbf{.SNAPSHOT}, \textbf{.snapdir}, etc.)
\item \texttt{lvm\_snapshot\_size=<lvpath:size>} Specify a custom snapshot size for a given LVM volume
\end{itemize}

\begin{verbatim}
# cat /opt/bacula/etc/bsnapshot.conf
trace=/tmp/snap.log
debug=10
lvm_snapshot_size=/dev/ubuntu-vg/root:5%
\end{verbatim}
\subsubsection{Application Quiescing}
When using Snapshots, it is very important to quiesce applications that are
In RunScripts, the \texttt{AfterSnapshot} keyword for the \texttt{RunsWhen} directive will
allow a command to be run just after the Snapshot creation. \texttt{AfterSnapshot} is a
synonym for the \texttt{AfterVSS} keyword.
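For instance, an application could be quiesced before the Snapshot is taken and
released immediately after it. The sketch below illustrates the idea; the script
paths and the service being stopped are purely hypothetical:

\begin{verbatim}
Job {
  Name = SnapBackup
  ...
  RunScript {
    Command = "/etc/init.d/mydb stop"    # hypothetical quiesce script
    RunsWhen = Before
  }
  RunScript {
    Command = "/etc/init.d/mydb start"   # hypothetical resume script
    RunsWhen = AfterSnapshot
  }
}
\end{verbatim}

Because the \texttt{AfterSnapshot} script runs as soon as the Snapshot exists,
the application is unavailable only for the short time needed to create the
Snapshot, not for the whole backup.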
\label{SnapRunScriptExample}
to create Snapshots. The amount of free space required depends on the activity of the
Logical Volume (LV). \textbf{bsnapshot} uses 10\% of the LV by
default. This number can be configured per LV in the
\textbf{bsnapshot.conf} file (See \vref{bsnapshotconf}).
\begin{verbatim}
[root@system1]# vgdisplay
--- Volume group ---
Available Space=5.862 GB
\end{verbatim}
\subsection{Data Encryption Cipher Configuration}
Bacula Enterprise version 8.0 and later allows configuration of the data
encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128,
but it is now possible to choose between the following ciphers:

\begin{itemize}
\item AES128 (default)
\item AES192
\item AES256
\item blowfish
\end{itemize}

The digest algorithm is set to SHA1 or SHA256 depending on the local OpenSSL
options. We advise you not to modify the PkiDigest default setting. Please
refer to the OpenSSL documentation to understand the pros and cons regarding these options.

\begin{verbatim}
FileDaemon {
  ...
  PkiCipher = AES256
}
\end{verbatim}

\subsubsection*{New Option Letter ``M'' for Accurate Directive in FileSet}

% waa - 20150317 - is 8.0.5 correct here?
Added in version 8.0.5, the new ``M'' option letter for the Accurate directive
in the FileSet Options block allows comparing the modification time and/or
creation time against the last backup timestamp. This is in contrast to the
existing option letters ``m'' and/or ``c'', mtime and ctime, which are checked
against the stored catalog values, which can vary across different machines
when using the BaseJob feature.

The advantage of the new ``M'' option letter for Jobs that refer to BaseJobs is
that it will instruct Bacula to back up files based on the last backup time, which
is more useful because the mtime/ctime timestamps may differ on various Clients,
causing files to be needlessly backed up.

\smallskip{}

\begin{verbatim}
Job {
  Name = USR
  Level = Base
  FileSet = BaseFS
  ...
}

Job {
  Name = Full
  FileSet = FullFS
  Base = USR
  ...
}

FileSet {
  Name = BaseFS
  Include {
    Options {
      Signature = MD5
    }
    File = /usr
  }
}

FileSet {
  Name = FullFS
  Include {
    Options {
      Accurate = Ms # check for mtime/ctime of last backup timestamp and Size
      Signature = MD5
    }
    File = /home
    File = /usr
  }
}
\end{verbatim}

\subsubsection*{New Debug Options}

In Bacula Enterprise version 8.0 and later, we introduced a new \texttt{options} parameter for
the \texttt{setdebug} bconsole command.

\smallskip{}

The following arguments to the new \texttt{options} parameter are available to control debug functions.

\begin{itemize}
\item [0] Clear debug flags
\item [i] Turn off, ignore bwrite() errors on restore on File Daemon
\item [d] Turn off decomp of BackupRead() streams on File Daemon
\item [t] Turn on timestamps in traces
\item [T] Turn off timestamps in traces

% waa - 20150306 - does this "c" item mean to say "Truncate trace file if one exists, otherwise append to it" ???
\item [c] Truncate trace file if trace file is activated

\item [l] Turn on recording events on P() and V()
\item [p] Turn on the display of the event ring when doing a bactrace
\end{itemize}

\smallskip{}

The following command will enable debugging for the File Daemon, truncate an existing trace file,
and turn on timestamps when writing to the trace file.

\begin{verbatim}
* setdebug level=10 trace=1 options=ct fd
\end{verbatim}

\smallskip{}

It is now possible to use a \textsl{class} of debug messages called \texttt{tags}
to control the debug output of Bacula daemons.

\begin{itemize}
\item [all] Display all debug messages
\item [bvfs] Display BVFS debug messages
\item [sql] Display SQL related debug messages
\item [memory] Display memory and poolmem allocation messages
\item [scheduler] Display scheduler related debug messages
\end{itemize}

\begin{verbatim}
* setdebug level=10 tags=bvfs,sql,memory
* setdebug level=10 tags=!bvfs

# bacula-dir -t -d 200,bvfs,sql
\end{verbatim}

The \texttt{tags} option is composed of a list of tags. Tags are separated by
``,'' or ``+'' or ``-'' or ``!''. To disable a specific tag, use ``-'' or ``!''
in front of the tag. Note that more tags are planned for future versions.

%%\LTXtable{\linewidth}{table_debugtags}

\subsection{Read Only Storage Devices}
This version of Bacula allows you to define a Storage daemon device
to be read-only. If the {\bf Read Only} directive is specified and
enabled, the drive can only be used for read operations.
The {\bf Read Only} directive can be defined in any bacula-sd.conf
Device resource, and is most useful for reserving one or more
drives for restores. An example is:

\begin{verbatim}
Read Only = yes
\end{verbatim}
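In context, a Device resource reserved for restores might look like the
following sketch; the device and path names are illustrative, not taken from a
real configuration:

\begin{verbatim}
Device {
  Name = RestoreDrive        # illustrative name for a restore-only drive
  Media Type = LTO-6
  Archive Device = /dev/nst1
  Autochanger = yes
  Read Only = yes            # this drive will never be used for writing
  ...
}
\end{verbatim}

With such a resource in place, write requests will be directed to the other
drives, leaving this one free for Restore jobs.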

\subsection{Catalog Performance Improvements}
There is a new Bacula database format (schema) in this version
of Bacula that eliminates the Filename table by placing the
Filename into the File record of the File table.
This substantially improves performance,
particularly for large (1GB or greater) databases.

% waa - 20150317 - Is 1GB _really_ considered to be a large database? Do we mean to say 100GB??

The \texttt{update\_xxx\_catalog} script will automatically update the
Bacula database format, but you should realize that for
very large databases (greater than 1GB), it may take some
time, and there are two different options for doing the
update: 1. Shutdown the database and update it. 2. Update the
database while production jobs are running. See the Bacula Systems
White Paper ``Migration-to-6.6'' on this subject.

\smallskip
This database format change can provide very significant improvements in
the speed of metadata insertion into the database, and in some cases
(backup of large email servers) can significantly reduce the size of the
database.

\subsection{New Truncate Command}
We have added a new truncate command to bconsole, which
will truncate a volume if the volume is purged and
the volume is also marked {\bf Action On Purge = Truncate}.
This feature was originally added in Bacula version 5.0.1,
but the mechanism for actually doing the truncate required
the user to enter a complicated command such as:

\begin{verbatim}
purge volume action=truncate storage=File pool=Default
\end{verbatim}

The above command is now simplified to be:

\begin{verbatim}
truncate storage=File pool=Default
\end{verbatim}

\subsection{New Resume Command}
The new \texttt{resume} command does exactly the same thing as the
{\bf restart} command, but for some users the
name may be more logical, because in general the
{\bf restart} command is used to resume running
a Job that was incomplete.
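Since \texttt{resume} is a synonym for \texttt{restart}, it is assumed here to
accept the same arguments; an Incomplete Job with JobId 4 might be resumed with:

\begin{verbatim}
* resume jobid=4
\end{verbatim}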

\subsection{New Prune ``Expired'' Volume Command}
In Bacula Enterprise 6.4, it is now possible to prune all volumes
(from a pool, or globally) that are ``expired''. This option can be
scheduled after or before the backup of the catalog and can be
combined with the \texttt{Truncate On Purge} option. The \texttt{prune expired volume} command may
be used instead of the \texttt{manual\_prune.pl} script.

\begin{verbatim}
* prune expired volume

* prune expired volume pool=FullPool
\end{verbatim}

To schedule this option automatically, it can be added to the Catalog backup job
definition.

\begin{verbatim}
Job {
  Name = CatalogBackup
  ...
  RunScript {
    Console = "prune expired volume yes"
    RunsWhen = Before
  }
}
\end{verbatim}


\subsection{New Job Edit Codes \%P \%C}
In various places such as RunScripts, you now have access to \%P to get the
current Bacula process ID (PID) and \%C to know if the current job is a
cloned job.
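For example, following the same pattern as the other RunScript edit codes, the
new codes could be logged from a RunAfterJob command:

\begin{verbatim}
RunAfterJob = "/bin/echo Job=%j PID=%P Cloned=%C"
\end{verbatim}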

\subsection{Enhanced Status and Error Messages}
We have enhanced the Storage daemon status output to be more
readable. This is important when there are a large number of
devices. In addition to formatting changes, it also includes more
details on which devices are reading and writing.

A number of error messages have been enhanced to have more specific
data on what went wrong.

If a file changes size while being backed up, the old and new sizes
are reported.

\subsection{Miscellaneous New Features}
\begin{itemize}
\item Allow unlimited line lengths in .conf files (previously limited
to 2000 characters).

\item Allow /dev/null in ChangerCommand to indicate a Virtual Autochanger.

\item Add a --fileprune option to the manual\_prune.pl script.

\item Add a -m option to make\_catalog\_backup.pl to do maintenance
on the catalog.

\item Safer code that cleans up the working directory when starting
the daemons. It limits what files can be deleted, hence enhancing
security.

\item Added a new .ls command in bconsole to permit browsing a client's
filesystem.

\item Fixed a number of bugs, including some obscure seg faults, and a
race condition that occurred infrequently when running Copy, Migration,
or Virtual Full backups.

\item Upgraded to a newer version of Qt4 for bat. All indications
are that this will improve bat's stability on Windows machines.

\item The Windows installers now detect and refuse to install on
an OS that does not match the 32/64 bit value of the installer.
\end{itemize}
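As a sketch of the new .ls command, a client's filesystem could be browsed from
bconsole as follows; the client name and path are illustrative:

\begin{verbatim}
* .ls client=localhost-fd path=/etc
\end{verbatim}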

\subsection{FD Storage Address}

When the Director is behind a NAT in a WAN area, to connect to
% the FileDaemon or
the Storage Daemon, the Director uses an ``external'' IP address,
while the File Daemon should use an ``internal'' IP address to contact the
Storage Daemon.

The normal way to handle this situation is to use a canonical name such as
``storage-server'' that will be resolved on the Director side as the WAN
address and on the Client side as the LAN address. It is now possible to
configure this behavior using the new directive \texttt{FDStorageAddress} in
the Storage or Client resource.


%%\bsysimageH{BackupOverWan1}{Backup Over WAN}{figbs6:fdstorageaddress}
% \label{fig:fdstorageaddress}

\begin{verbatim}
Storage {
  Name = storage1
  Address = 65.1.1.1
  FD Storage Address = 10.0.0.1
  SD Port = 9103
  ...
}
\end{verbatim}

% # or in the Client resource
%

\begin{verbatim}
Client {
  Name = client1
  Address = 65.1.1.2
  FD Storage Address = 10.0.0.1
  FD Port = 9102
  ...
}
\end{verbatim}

Note that using the Client \texttt{FDStorageAddress} directive will not allow
the use of multiple Storage Daemons; all Backup or Restore requests will be sent to
the specified \texttt{FDStorageAddress}.

\subsection{Maximum Concurrent Read Jobs}
This is a new directive that can be used in the {\bf bacula-dir.conf} file
in the Storage resource. The main purpose is to limit the number
of concurrent Copy, Migration, and VirtualFull jobs so that
they don't monopolize all the Storage drives, causing a deadlock situation
where all the drives are allocated for reading but none remain for
writing. This deadlock situation can occur when running multiple
simultaneous Copy, Migration, and VirtualFull jobs.

\smallskip
The default value is set to 0 (zero), which means there is no
limit on the number of read jobs. Note, limiting the read jobs
does not apply to Restore jobs, which are normally started by
hand. A reasonable value for this directive is one half the number
of drives that the Storage resource has, rounded down. Doing so
will leave the same number of drives for writing and will generally
avoid over committing drives and a deadlock.
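As a sketch, a Storage resource for an autochanger with four drives might
reserve half of them for reading, following the rule of thumb above; the
resource and device names are illustrative:

\begin{verbatim}
Storage {
  Name = TapeLibrary                 # illustrative name
  Address = sd.example.com
  SD Port = 9103
  Device = LTO-Changer               # autochanger with 4 drives
  Media Type = LTO-6
  Autochanger = yes
  Maximum Concurrent Jobs = 4
  Maximum Concurrent Read Jobs = 2   # half of the 4 drives
  ...
}
\end{verbatim}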

\subsection{Incomplete Jobs}
During a backup, if the Storage daemon experiences a disconnection
from the File daemon (normally a comm line problem
or possibly an FD failure), under conditions that the SD determines
to be safe, it will mark the failed job as Incomplete rather than
Failed. This is done only if there is sufficient valid backup
data written to the Volume. The advantage of an Incomplete
job is that it can be restarted by the new bconsole {\bf restart}
command from the point where it left off rather than from the
beginning of the job, as is the case with a cancel.

\subsection{The Stop Command}
Bacula has been enhanced to provide a {\bf stop} command,
very similar to the {\bf cancel} command, with the main difference
that the Job that is stopped is marked as Incomplete so that
it can be restarted later by the {\bf restart} command where
it left off (see below). The {\bf stop} command with no
arguments will, like the cancel command, prompt you with the
list of running jobs, allowing you to select one, which might
look like the following:

\begin{verbatim}
*stop
Select Job:
  1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
  2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
  3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.
\end{verbatim}

\subsection{The Restart Command}
The new {\bf restart} command allows console users to restart
a canceled, failed, or incomplete Job. For canceled and failed
Jobs, the Job will restart from the beginning. For incomplete
Jobs the Job will restart at the point that it was stopped, either
by a stop command or by some recoverable failure.

\smallskip
If you enter the {\bf restart} command in bconsole, you will get the
following prompts:

\begin{verbatim}
*restart
You have the following choices:
     1: Incomplete
     2: Canceled
     3: Failed
     4: All
Select termination code: (1-4):
\end{verbatim}

If you select the {\bf All} option, you may see something like:

\begin{verbatim}
Select termination code: (1-4): 4
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes  | jobstatus |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
|     1 | Incremental | 2012-03-26 12:15:21 | B    | F     |        0 |         0 | A         |
|     2 | Incremental | 2012-03-26 12:18:14 | B    | F     |      350 | 4,013,397 | I         |
|     3 | Incremental | 2012-03-26 12:18:30 | B    | F     |        0 |         0 | A         |
|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 | 3,548,058 | I         |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
Enter the JobId list to select:
\end{verbatim}

Then you may enter one or more JobIds to be restarted, which may
take the form of a list of JobIds separated by commas, and/or JobId
ranges such as {\bf 1-4}, which indicates you want to restart JobIds
1 through 4, inclusive.

\subsection{Job Bandwidth Limitation}

The new {\bf Job Bandwidth Limitation} directive may be added to the File
daemon's and/or Director's configuration to limit the bandwidth used by a
Job on a Client. It can be set in the File daemon's conf file for all Jobs
run in that File daemon, or it can be set for each Job in the Director's
conf file. The speed is always specified in bytes per second.

For example:
\begin{verbatim}
FileDaemon {
  Name = localhost-fd
  Working Directory = /some/path
  Pid Directory = /some/path
  ...
  Maximum Bandwidth Per Job = 5Mb/s
}
\end{verbatim}

The above example would cause any jobs running with the FileDaemon to not
exceed 5 megabytes per second of throughput when sending data to the
Storage Daemon. Note, the speed is always specified in bytes per second
(not in bits per second), and the case (upper/lower) of the specification
characters is ignored (i.e. 1MB/s = 1Mb/s).

You may specify the following speed parameter modifiers:
k/s (1,000 bytes per second), kb/s (1,024 bytes per second),
m/s (1,000,000 bytes per second), or mb/s (1,048,576 bytes per second).

For example:
\begin{verbatim}
Job {
  Name = localhost-data
  FileSet = FS_localhost
  Accurate = yes
  ...
  Maximum Bandwidth = 5Mb/s
  ...
}
\end{verbatim}

The above example would cause Job \texttt{localhost-data} to not exceed 5MB/s
of throughput when sending data from the File daemon to the Storage daemon.

A new console command \texttt{setbandwidth} allows you to dynamically set the
maximum throughput of a running Job or for future jobs of a Client.

\begin{verbatim}
* setbandwidth limit=1000 jobid=10
\end{verbatim}

Please note that the value specified for the \texttt{limit} command
line parameter is always in units of 1024 bytes (i.e. the number
is multiplied by 1024 to give the number of bytes per second). As
a consequence, the above limit of 1000 will be interpreted as a
limit of 1000 * 1024 = 1,024,000 bytes per second.
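To apply a limit to future jobs of a Client rather than to a single running
Job, the client form of the command is assumed to take a \texttt{client}
parameter; the client name below is illustrative:

\begin{verbatim}
* setbandwidth limit=1000 client=localhost-fd
\end{verbatim}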

\subsection{Always Backup a File}

When the Accurate mode is turned on, you can decide to always back up a file
by using the new {\bf A} Accurate option in your FileSet. For example:

\begin{verbatim}
Job {
  Name = ...
  FileSet = FS_Example
  Accurate = yes
  ...
}

FileSet {
  Name = FS_Example
  Include {
    Options {
      Accurate = A
    }
    File = /file
    File = /file2
  }
  ...
}
\end{verbatim}

This project was funded by Bacula Systems based on an idea of James Harper and
is available with the Bacula Enterprise Edition.

\subsection{Setting Accurate Mode at Runtime}

You are now able to specify the Accurate mode on the \texttt{run} command and
in the Schedule resource.

\begin{verbatim}
* run accurate=yes job=Test
\end{verbatim}

\begin{verbatim}
Schedule {
  Name = WeeklyCycle
  Run = Full 1st sun at 23:05
  Run = Differential accurate=yes 2nd-5th sun at 23:05
  Run = Incremental accurate=no mon-sat at 23:05
}
\end{verbatim}

This can allow you to save memory and CPU resources on the catalog server in
some cases.

\medskip
These advanced tuning options are available with the Bacula Enterprise Edition.

% Common with community
\subsection{Additions to RunScript variables}
You now have access to JobBytes, JobFiles and the Director name using \%b, \%F and \%D
in your runscript command. The Client address is now available through \%h.

\begin{verbatim}
RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"
\end{verbatim}

\subsection{LZO Compression}

LZO compression has been added to the Unix File Daemon. From the user's point of view,
it works like the GZIP compression (just replace {\bf compression=GZIP} with
{\bf compression=LZO}).

For example:
\begin{verbatim}
Include {
  Options { compression=LZO }
  File = /home
  File = /data
}
\end{verbatim}

LZO provides much faster compression and decompression speeds but a lower
compression ratio than GZIP. It is a good option when you back up to disk. For
tape, the built-in drive compression may be a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your
backup. On a modern CPU, it should be able to run almost as fast as:

\begin{itemize}
\item your client can read data from disk, unless you have very fast disks like
  SSDs or a large/fast RAID array.
\item the data transfers between the File daemon and the Storage daemon, even on
  a 1Gb/s link.
\end{itemize}

Note that Bacula only uses one compression level, LZO1X-1.

\medskip
The code for this feature was contributed by Laurent Papier.

\subsection{Purge Migration Job}

The new {\bf Purge Migration Job} directive may be added to the Migration
Job definition in the Director's configuration file. When it is enabled,
the Job that was migrated during a migration will be purged at
the end of the migration job.

For example:
\begin{verbatim}
Job {
  Name = "migrate-job"
  Type = Migrate
  Level = Full
  Client = localhost-fd
  FileSet = "Full Set"
  Messages = Standard
  Storage = DiskChanger
  Pool = Default
  Selection Type = Job
  Selection Pattern = ".*Save"
  ...
  Purge Migration Job = yes
}
\end{verbatim}

\medskip

This project was submitted by Dunlap Blake; testing and documentation was funded
by Bacula Systems.

\subsection{Changes in the Pruning Algorithm}

We rewrote the job pruning algorithm in this version. Previously, some users
reported that the pruning process at the end of jobs was very long. This should
no longer be the case. Now, Bacula will not automatically prune a Job if that
particular Job is needed to restore data. Example:

\begin{verbatim}
JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now
\end{verbatim}

In this example, if the Job Retention defined in the Pool or in the Client
resource would allow Jobs with JobIds 1 through 4 to be pruned, Bacula will
detect that JobIds 1 and 4 are essential to restore data at the current state
and will prune only JobIds 2 and 3.

\textbf{Important:} this change affects only the automatic pruning step after a
Job and the \texttt{prune jobs} Bconsole command. If a volume expires after the
\texttt{VolumeRetention} period, important jobs can still be pruned.

\subsection{Ability to Verify any specified Job}
You now have the ability to tell Bacula which Job should be verified instead of
automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog levels.

To verify a given job, just specify its JobId as an argument when starting the
job.
\begin{verbatim}
*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName:     VerifyVolume
Level:       VolumeToCatalog
Client:      127.0.0.1-fd
FileSet:     Full Set
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
Priority:    10
OK to run? (yes/mod/no):
\end{verbatim}

\chapter{New Features in 7.0.0}
This chapter presents the new features that have been added to