pax_global_header00006660000000000000000000000064133172172510014514gustar00rootroot0000000000000052 comment=323acf2c43926d6cb9bf035d347406cbe78a13f4 tuned-2.10.0/000077500000000000000000000000001331721725100127135ustar00rootroot00000000000000tuned-2.10.0/.gitignore000066400000000000000000000000371331721725100147030ustar00rootroot00000000000000*.pyc *.pyo tuned-*.tar.bz2 *~ tuned-2.10.0/00_tuned000066400000000000000000000020231331721725100142510ustar00rootroot00000000000000#! /bin/sh set -e # grub-mkconfig helper script. # Copyright (C) 2014 Red Hat, Inc # Author: Jaroslav Škarvada # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # tunedcfgdir=/etc/tuned tuned_bootcmdline_file=$tunedcfgdir/bootcmdline . $tuned_bootcmdline_file echo "set tuned_params=\"$TUNED_BOOT_CMDLINE\"" echo "set tuned_initrd=\"$TUNED_BOOT_INITRD_ADD\"" tuned-2.10.0/AUTHORS000066400000000000000000000006131331721725100137630ustar00rootroot00000000000000Maintainers: - Jaroslav Škarvada Based on old tuned/ktune code by: - Philip Knirsch - Thomas Woerner Significant contributors: - Jan Včelák - Jan Kaluža Other contributors: - Petr Lautrbach - Marcela Mašláňová - Jarod Wilson - Jan Hutař - Arnaldo Carvalho de Melo - perf code for plugin_scheduler Icon: - Mariia Leonova tuned-2.10.0/COPYING000066400000000000000000000432541331721725100137560ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. 
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. 
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. 
If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. 
Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. tuned-2.10.0/INSTALL000066400000000000000000000007131331721725100137450ustar00rootroot00000000000000Installation instructions ************************* The tuned daemon is written in pure Python. Nothing requires to be built. For installation use 'make install'. Optionally DESTDIR can be appended. By default, the tuned modules are installed to the Python3 destination (e.g. /usr/lib/python3.6/site-packages/) and shebangs in executable Python files are modified to use Python3. If you want tuned to use Python2 instead, use 'make PYTHON=python2 install'. 
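For example, a staged install that keeps Python2 could be run like this (the DESTDIR path below is only an illustration; use whatever staging directory fits your packaging workflow):

  # make PYTHON=python2 DESTDIR=/tmp/tuned-root install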
tuned-2.10.0/Makefile000066400000000000000000000206171331721725100143610ustar00rootroot00000000000000NAME = tuned # set to devel for nightly GIT snapshot BUILD = release # which config to use in mock-build target MOCK_CONFIG = rhel-7-x86_64 # scratch-build for triggering Jenkins SCRATCH_BUILD_TARGET = rhel-7.5-candidate VERSION = $(shell awk '/^Version:/ {print $$2}' tuned.spec) GIT_DATE = $(shell date +'%Y%m%d') ifeq ($(BUILD), release) RPM_ARGS += --without snapshot MOCK_ARGS += --without=snapshot RPM_VERSION = $(NAME)-$(VERSION)-1 else RPM_ARGS += --with snapshot MOCK_ARGS += --with=snapshot GIT_SHORT_COMMIT = $(shell git rev-parse --short=8 --verify HEAD) GIT_SUFFIX = $(GIT_DATE)git$(GIT_SHORT_COMMIT) GIT_PSUFFIX = .$(GIT_SUFFIX) RPM_VERSION = $(NAME)-$(VERSION)-1$(GIT_PSUFFIX) endif UNITDIR_FALLBACK = /usr/lib/systemd/system UNITDIR_DETECT = $(shell pkg-config systemd --variable systemdsystemunitdir || rpm --eval '%{_unitdir}' 2>/dev/null || echo $(UNITDIR_FALLBACK)) UNITDIR = $(UNITDIR_DETECT:%{_unitdir}=$(UNITDIR_FALLBACK)) TMPFILESDIR_FALLBACK = /usr/lib/tmpfiles.d TMPFILESDIR_DETECT = $(shell pkg-config systemd --variable tmpfilesdir || rpm --eval '%{_tmpfilesdir}' 2>/dev/null || echo $(TMPFILESDIR_FALLBACK)) TMPFILESDIR = $(TMPFILESDIR_DETECT:%{_tmpfilesdir}=$(TMPFILESDIR_FALLBACK)) VERSIONED_NAME = $(NAME)-$(VERSION)$(GIT_PSUFFIX) SYSCONFDIR = /etc DATADIR = /usr/share DOCDIR = $(DATADIR)/doc/$(NAME) PYTHON = python3 PYLINT = pylint-3 ifeq ($(PYTHON),python2) PYLINT = pylint-2 endif SHEBANG_REWRITE_REGEX= '1s/^(\#!\/usr\/bin\/)\/\1$(PYTHON)/' PYTHON_SITELIB = $(shell $(PYTHON) -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib());') ifeq ($(PYTHON_SITELIB),) $(error Failed to determine python library directory) endif TUNED_PROFILESDIR = /usr/lib/tuned TUNED_RECOMMEND_DIR = $(TUNED_PROFILESDIR)/recommend.d TUNED_USER_RECOMMEND_DIR = $(SYSCONFDIR)/tuned/recommend.d BASH_COMPLETIONS = $(DATADIR)/bash-completion/completions copy_executable = install -Dm 0755 $(1) $(2) rewrite_shebang = sed -i -r -e $(SHEBANG_REWRITE_REGEX) $(1) restore_timestamp = touch -r $(1) $(2) install_python_script = $(call copy_executable,$(1),$(2)) \ && $(call rewrite_shebang,$(2)) && $(call restore_timestamp,$(1),$(2)); release-dir: mkdir -p $(VERSIONED_NAME) release-cp: release-dir cp -a AUTHORS COPYING INSTALL README $(VERSIONED_NAME) cp -a tuned.py tuned.spec tuned.service tuned.tmpfiles Makefile tuned-adm.py \ tuned-adm.bash dbus.conf recommend.conf tuned-main.conf 00_tuned \ bootcmdline modules.conf com.redhat.tuned.policy \ com.redhat.tuned.gui.policy tuned-gui.py tuned-gui.glade \ tuned-gui.desktop $(VERSIONED_NAME) cp -a doc experiments libexec man profiles systemtap tuned contrib icons \ $(VERSIONED_NAME) archive: clean release-cp tar czf $(VERSIONED_NAME).tar.gz $(VERSIONED_NAME) rpm-build-dir: mkdir rpm-build-dir srpm: archive rpm-build-dir rpmbuild --define "_sourcedir `pwd`/rpm-build-dir" --define "_srcrpmdir `pwd`/rpm-build-dir" \ --define "_specdir `pwd`/rpm-build-dir" --nodeps $(RPM_ARGS) -ts $(VERSIONED_NAME).tar.gz rpm: archive rpm-build-dir rpmbuild --define "_sourcedir `pwd`/rpm-build-dir" --define "_srcrpmdir `pwd`/rpm-build-dir" \ --define "_specdir `pwd`/rpm-build-dir" --nodeps $(RPM_ARGS) -tb $(VERSIONED_NAME).tar.gz clean-mock-result-dir: rm -f mock-result-dir/* mock-result-dir: mkdir mock-result-dir # delete RPM files older than cca. 
one week if total space occupied is more than 5 MB tidy-mock-result-dir: mock-result-dir if [ `du -bs mock-result-dir | tail -n 1 | cut -f1` -gt 5000000 ]; then \ rm -f `find mock-result-dir -name '*.rpm' -mtime +7`; \ fi mock-build: srpm mock -r $(MOCK_CONFIG) $(MOCK_ARGS) --resultdir=`pwd`/mock-result-dir `ls rpm-build-dir/*$(RPM_VERSION).*.src.rpm | head -n 1`&& \ rm -f mock-result-dir/*.log mock-devel-build: srpm mock -r $(MOCK_CONFIG) --with=snapshot \ --define "git_short_commit `if [ -n \"$(GIT_SHORT_COMMIT)\" ]; then echo $(GIT_SHORT_COMMIT); else git rev-parse --short=8 --verify HEAD; fi`" \ --resultdir=`pwd`/mock-result-dir `ls rpm-build-dir/*$(RPM_VERSION).*.src.rpm | head -n 1` && \ rm -f mock-result-dir/*.log createrepo: mock-devel-build createrepo mock-result-dir # scratch build to triggering Jenkins scratch-build: mock-devel-build brew build --scratch --nowait $(SCRATCH_BUILD_TARGET) `ls mock-result-dir/*$(GIT_DATE)git*.*.src.rpm | head -n 1` nightly: tidy-mock-result-dir createrepo scratch-build rsync -ave ssh --delete --progress mock-result-dir/ jskarvad@fedorapeople.org:/home/fedora/jskarvad/public_html/tuned/devel/repo/ install-dirs: mkdir -p $(DESTDIR)$(PYTHON_SITELIB) mkdir -p $(DESTDIR)$(TUNED_PROFILESDIR) mkdir -p $(DESTDIR)/var/lib/tuned mkdir -p $(DESTDIR)/var/log/tuned mkdir -p $(DESTDIR)/run/tuned mkdir -p $(DESTDIR)$(DOCDIR) mkdir -p $(DESTDIR)$(SYSCONFDIR) mkdir -p $(DESTDIR)$(TUNED_RECOMMEND_DIR) mkdir -p $(DESTDIR)$(TUNED_USER_RECOMMEND_DIR) install: install-dirs # library cp -a tuned $(DESTDIR)$(PYTHON_SITELIB) # binaries $(call install_python_script,tuned.py,$(DESTDIR)/usr/sbin/tuned) $(call install_python_script,tuned-adm.py,$(DESTDIR)/usr/sbin/tuned-adm) $(call install_python_script,tuned-gui.py,$(DESTDIR)/usr/sbin/tuned-gui) $(foreach file, diskdevstat netdevstat scomes, \ install -Dpm 0755 systemtap/$(file) $(DESTDIR)/usr/sbin/$(notdir $(file));) $(call install_python_script, \ systemtap/varnetload, $(DESTDIR)/usr/sbin/varnetload) # glade install -Dpm 0644 tuned-gui.glade $(DESTDIR)$(DATADIR)/tuned/ui/tuned-gui.glade # tools $(call install_python_script, \ experiments/powertop2tuned.py, $(DESTDIR)/usr/bin/powertop2tuned) # configuration files install -Dpm 0644 tuned-main.conf $(DESTDIR)$(SYSCONFDIR)/tuned/tuned-main.conf # None profile in the moment, autodetection will be used echo -n > $(DESTDIR)$(SYSCONFDIR)/tuned/active_profile echo -n > $(DESTDIR)$(SYSCONFDIR)/tuned/profile_mode install -Dpm 0644 bootcmdline $(DESTDIR)$(SYSCONFDIR)/tuned/bootcmdline install -Dpm 0644 modules.conf $(DESTDIR)$(SYSCONFDIR)/modprobe.d/tuned.conf # profiles & system config cp -a profiles/* $(DESTDIR)$(TUNED_PROFILESDIR)/ mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime/realtime-variables.conf \ $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-variables.conf mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime-virtual-guest/realtime-virtual-guest-variables.conf \ $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-virtual-guest-variables.conf mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime-virtual-host/realtime-virtual-host-variables.conf \ $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-virtual-host-variables.conf mv $(DESTDIR)$(TUNED_PROFILESDIR)/cpu-partitioning/cpu-partitioning-variables.conf \ $(DESTDIR)$(SYSCONFDIR)/tuned/cpu-partitioning-variables.conf mv $(DESTDIR)$(TUNED_PROFILESDIR)/sap-hana-vmware/sap-hana-vmware-variables.conf \ $(DESTDIR)$(SYSCONFDIR)/tuned/sap-hana-vmware-variables.conf install -pm 0644 recommend.conf $(DESTDIR)$(TUNED_RECOMMEND_DIR)/50-tuned.conf # bash completion install -Dpm 0644 
tuned-adm.bash $(DESTDIR)$(BASH_COMPLETIONS)/tuned-adm # runtime directory install -Dpm 0644 tuned.tmpfiles $(DESTDIR)$(TMPFILESDIR)/tuned.conf # systemd units install -Dpm 0644 tuned.service $(DESTDIR)$(UNITDIR)/tuned.service # dbus configuration install -Dpm 0644 dbus.conf $(DESTDIR)$(SYSCONFDIR)/dbus-1/system.d/com.redhat.tuned.conf # grub template install -Dpm 0755 00_tuned $(DESTDIR)$(SYSCONFDIR)/grub.d/00_tuned # polkit configuration install -Dpm 0644 com.redhat.tuned.policy $(DESTDIR)$(DATADIR)/polkit-1/actions/com.redhat.tuned.policy install -Dpm 0644 com.redhat.tuned.gui.policy $(DESTDIR)$(DATADIR)/polkit-1/actions/com.redhat.tuned.gui.policy # manual pages $(foreach man_section, 5 7 8, $(foreach file, $(wildcard man/*.$(man_section)), \ install -Dpm 0644 $(file) $(DESTDIR)$(DATADIR)/man/man$(man_section)/$(notdir $(file));)) # documentation cp -a doc/* $(DESTDIR)$(DOCDIR) cp AUTHORS COPYING README $(DESTDIR)$(DOCDIR) # libexec scripts $(foreach file, $(wildcard libexec/*), \ $(call install_python_script, \ $(file), $(DESTDIR)/usr/libexec/tuned/$(notdir $(file)))) # icon install -Dpm 0644 icons/tuned.svg $(DESTDIR)$(DATADIR)/icons/hicolor/scalable/apps/tuned.svg # desktop file install -dD $(DESTDIR)$(DATADIR)/applications desktop-file-install --dir=$(DESTDIR)$(DATADIR)/applications tuned-gui.desktop clean: find -name "*.pyc" | xargs rm -f rm -rf $(VERSIONED_NAME) rpm-build-dir test: $(PYTHON) -m unittest discover tests lint: $(PYLINT) -E -f parseable tuned *.py .PHONY: clean archive srpm tag test lint tuned-2.10.0/README000066400000000000000000000065411331721725100136010ustar00rootroot00000000000000Tuned: Daemon for monitoring and adaptive tuning of system devices. (This is tuned 2.0 with a new code base. If you are looking for the older version, please check out branch '1.0' in our Git repository.) How to use it ------------- In Fedora, Red Hat Enterprise Linux, and their derivates install tuned package (optionally tuned-utils, tuned-utils-systemtap, and tuned-profiles-compat): # yum install tuned After the installation, start the tuned service: # systemctl start tuned You might also want to run tuned whenever your machine starts: # systemctl enable tuned If the daemon is running you can easily control it using 'tuned-adm' command line utility. This tool communicates with the daemon over DBus. Any user can list the available profiles and see which one is active. But the profiles can be switched only by root user or by any user with physical console allocated on the machine (X11, physical tty, but no SSH). To see the current active profile, run: $ tuned-adm active To list all available profiles, run: $ tuned-adm list To switch to a different profile, run: # tuned-adm profile Your profile choice is also written into /etc/tuned/active_profile and this choice is used when the daemon is restarted (e.g. with the machine reboot). To disable all tunings, run: # tuned-adm off # tuned-adm recommend Recommend profile suitable for your system. Currently only static detection is implemented - it decides according to data in /etc/system-release-cpe and virt-what output. The rules for autodetection are defined in the file /usr/lib/tuned/recommend.d/50-tuned.conf. They can be overridden by the user by putting a file to /etc/tuned/recommend.d or a file named recommend.conf into /etc/tuned (see tuned-adm(8) for more details). The default rules recommend profiles targeted to the best performance or the balanced profile if unsure. 
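For example, a typical first session might look like this (the exact profile names depend on which profile packages are installed on your system):

  $ tuned-adm recommend
  $ tuned-adm list
  # tuned-adm profile balanced
  $ tuned-adm active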
Available tunings
-----------------

We are currently working on many new tuning features. Some are described in
the manual pages, others are not yet documented.

Authors
-------

The best way to contact the authors of the project is to use our mailing list:
power-management@lists.fedoraproject.org

If you want to contact an individual author, you will find the e-mail address
in every commit message in our Git repository:
https://github.com/redhat-performance/tuned.git

You can also join the #fedora-power IRC channel on Freenode.

License
-------

Copyright (C) 2008-2016 Red Hat, Inc.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

The full text of the license is enclosed in the COPYING file.

The icon:
The Tuned icon was created by Mariia Leonova and it is licensed under the
Creative Commons Attribution-ShareAlike 3.0 license
(http://creativecommons.org/licenses/by-sa/3.0/legalcode).

tuned-2.10.0/TODO

* Implement a configurable policy which determines how many (and how big)
  log buffers a user with a given UID can create using the log_capture_start
  DBus call.
* Destroy the log handler created by log_capture_start() when the caller
  disconnects from DBus.
* Use only one timer for destroying log handlers at a time. Create a new
  timer as necessary when the old timer fires.
* Handle signals in tuned-adm so that the log_capture_finish() DBus call is
  called even if we receive a signal. This may require some rewriting of
  tuned-adm.

tuned-2.10.0/bootcmdline

# This file specifies additional parameters for the kernel boot command line
# and initrd overlay images. Its content is set by the Tuned bootloader
# plugin and sourced by grub2-mkconfig (via the /etc/grub.d/00_tuned script).
#
# Please do not edit this file. Its content can be overwritten by switching
# the Tuned profile.
#
# If you need to add parameters to the kernel boot command line, create a
# Tuned profile containing the following:
#
# [bootloader]
# cmdline = YOUR_ADDITIONAL_KERNEL_PARAMETERS
#
# Then switch to your newly created profile by:
#
# tuned-adm profile YOUR_NEW_PROFILE
#
# Your current grub2 config will be patched, but you can also
# regenerate it anytime by:
#
# grub2-mkconfig -o /boot/grub2/grub.cfg
#
# YOUR_ADDITIONAL_KERNEL_PARAMETERS will stay preserved.
# # Similarly if you need to add initrd overlay image, create Tuned profile # containing the following: # # [bootloader] # initrd_add_img = INITRD_OVERLAY_IMAGE # # or to generate initrd overlay image from the directory: # # [bootloader] # initrd_add_dir = INITRD_OVERLAY_DIRECTORY TUNED_BOOT_CMDLINE= TUNED_BOOT_INITRD_ADD= tuned-2.10.0/com.redhat.tuned.gui.policy000066400000000000000000000013311331721725100200570ustar00rootroot00000000000000 Run tuned-gui as root Authentication is required to run tuned-gui auth_admin auth_admin auth_admin /usr/sbin/tuned-gui true tuned-2.10.0/com.redhat.tuned.policy000066400000000000000000000141601331721725100173000ustar00rootroot00000000000000 Tuned https://fedorahosted.org/tuned/ tuned Show active profile Authentication is required to show active profile yes yes yes Show current profile selection mode Authentication is required to show current profile selection mode yes yes yes Disable Tuned Authentication is required to disable Tuned auth_admin auth_admin yes Check whether Tuned is running Authentication is required to check whether Tuned is running yes yes yes Show information about Tuned profile Authentication is required to show information about Tuned profile yes yes yes List Tuned profiles Authentication is required to list Tuned profiles yes yes yes List Tuned profiles Authentication is required to list Tuned profiles yes yes yes Show Tuned profile name which is recommended for your system Authentication is required to show recommended profile name yes yes yes Reload Tuned configuration Authentication is required to reload Tuned configuration auth_admin auth_admin yes Start Tuned daemon Authentication is required to start Tuned daemon auth_admin auth_admin yes Stop Tuned daemon Authentication is required to stop Tuned daemon auth_admin auth_admin yes Switch Tuned profile Authentication is required to switch Tuned profile auth_admin auth_admin yes Start log capture Authentication is required to start log capture auth_admin auth_admin yes Stop log capture and return captured log Authentication is required to stop log capture auth_admin auth_admin yes Enable automatic profile selection mode Authentication is required to change profile selection mode auth_admin auth_admin yes Verify Tuned profile Authentication is required to verify Tuned profile yes yes yes Verify Tuned profile, ignore missing values Authentication is required to verify Tuned profile yes yes yes tuned-2.10.0/contrib/000077500000000000000000000000001331721725100143535ustar00rootroot00000000000000tuned-2.10.0/contrib/sysvinit/000077500000000000000000000000001331721725100162435ustar00rootroot00000000000000tuned-2.10.0/contrib/sysvinit/tuned000077500000000000000000000041731331721725100173150ustar00rootroot00000000000000#!/bin/sh ### BEGIN INIT INFO # Provides: tuned # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Should-Start: $portmap # Should-Stop: $portmap # X-Start-Before: nis # X-Stop-After: nis # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # X-Interactive: false # Short-Description: Tuned daemon # Description: Dynamic System Tuning Daemon ### END INIT INFO # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/sbin:/usr/sbin:/bin:/usr/bin DESC="System tuning daemon" NAME=tuned DAEMON=/usr/sbin/tuned PIDFILE=/var/run/tuned.pid TUNED_OPT1=--log TUNED_OPT2=--pid TUNED_OPT3=--daemon SCRIPTNAME=/etc/init.d/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r 
/etc/default/$NAME ] && . /etc/default/$NAME # Define LSB log_* functions. . /lib/lsb/init-functions do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # other if daemon could not be started or a failure occured start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $TUNED_OPT1 $TUNED_OPT2 $TUNED_OPT3 } do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # other if daemon could not be stopped or a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --exec $DAEMON } case "$1" in start) if init_is_upstart; then exit 1 fi log_daemon_msg "Starting $DESC" "$TUNED" do_start case "$?" in 0) log_end_msg 0 ;; 1) log_progress_msg "already started" log_end_msg 0 ;; *) log_end_msg 1 ;; esac ;; stop) if init_is_upstart; then exit 0 fi log_daemon_msg "Stopping $DESC" "$TUNED" do_stop case "$?" in 0) log_end_msg 0 ;; 1) log_progress_msg "already stopped" log_end_msg 0 ;; *) log_end_msg 1 ;; esac ;; restart) if init_is_upstart; then exit 1 fi $0 stop $0 start ;; status) status_of_proc -p $PIDFILE $DAEMON $TUNED && exit 0 || exit $? ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|status}" >&2 exit 3 ;; esac : tuned-2.10.0/contrib/upstart/000077500000000000000000000000001331721725100160555ustar00rootroot00000000000000tuned-2.10.0/contrib/upstart/tuned.conf000066400000000000000000000002561331721725100200460ustar00rootroot00000000000000description "Dynamic system tuning daemon" start on runlevel [2345] stop on runlevel [!2345] exec tuned --log --pid --daemon post-stop exec rm -f /var/run/tuned/tuned.pid tuned-2.10.0/dbus.conf000066400000000000000000000011131331721725100145130ustar00rootroot00000000000000 tuned-2.10.0/doc/000077500000000000000000000000001331721725100134605ustar00rootroot00000000000000tuned-2.10.0/doc/README.NFV000066400000000000000000000004271331721725100147730ustar00rootroot00000000000000The package tuned-profiles-nfv is kept for backward compatibility and can be dropped in the future. It has no useful content, it only brings in tuned-profiles-nfv-guest and tuned-profiles-nfv-host dependencies. You can install them by hand and remove tuned-profiles-nfv package. tuned-2.10.0/doc/README.scomes000066400000000000000000000043301331721725100156300ustar00rootroot00000000000000 Author: Jan Hutar Prepare your system: # yum install systemtap # debuginfo-install kernel Usage scomes Binary you want to measure must be named uniquely (or ensure there are no other binaries with same name running on the system). Now run the scomes with the command-line option being name of the binary and then run the binary: # scomes -c " [ ...] [ ...] 
- measured program
- how often you want to see current results; the value is in seconds and
  0 means "show only last results"

scomes will start printing statistics every given number of seconds and,
once the binary ends, it will print the final statistics like this:

Monitored execname: my_binary_3d4f8
Number of syscalls: 0
Kernel/Userspace ticks: 0/0
Read/Written bytes: 0
Transmitted/Recived bytes: 0
Polling syscalls: 0
SCORE: 0
-----------------------------------
Monitored execname: my_binary_3d4f8
Number of syscalls: 3716
Kernel/Userspace ticks: 34/339
Read/Written bytes: 446282
Transmitted/Recived bytes: 16235
Polling syscalls: 2
SCORE: 4479767
-----------------------------------
LAST RESULTS:
-----------------------------------
Monitored execname: my_binary_3d4f8
Number of syscalls: 4529
Kernel/Userspace ticks: 44/446
Read/Written bytes: 454352
Transmitted/Recived bytes: 22003
Polling syscalls: 3
SCORE: 4566459
-----------------------------------
QUITTING
-----------------------------------

Note: on F11 please call scomes with stap --skip-badvars scomes.stp.

Explanation of the statistics

Monitored execname
    name of the binary (passed as a command-line argument)
Number of syscalls
    number of all syscalls performed by the binary
Kernel/Userspace ticks
    number of processor ticks the binary spends in the kernel and in
    userspace respectively (kticks and uticks variables)
Read/Written bytes
    sum of the bytes the binary reads from and writes to files
    (readwrites variable)
Transmitted/Recived bytes
    sum of the bytes the binary receives from and transmits to the network
    (ifxmit and ifrecv variables)
Polling syscalls
    "bad" polling syscalls the binary makes (poll, select, epoll, itimer,
    futex, nanosleep, signal)
SCORE
    TODO - but for now:
    SCORE = kticks + 2*uticks + 10*readwrites + ifxmit + ifrecv

tuned-2.10.0/doc/README.utils

SystemTap disk and network statistics monitoring tools
=======================================================

The netdevstat and diskdevstat are two SystemTap tools that allow the user
to collect detailed information about the network and disk activity of all
applications running on a system. The two tools were inspired by powertop,
which shows the number of wakeups for every application per second. The
basic idea is to collect statistics about the running applications in a form
that lets a user identify power-greedy applications, e.g. applications that
perform many small IO operations instead of fewer, larger ones. Typical
monitoring tools only show transfer speeds, which isn't very meaningful in
that context.

To run these tools, you need to have systemtap installed. Then simply type
netdevstat or diskdevstat and the scripts will start. Both can take up to
3 parameters:

diskdevstat [Update interval] [Total duration] [Display histogram at the end]
netdevstat [Update interval] [Total duration] [Display histogram at the end]

Update interval: Time in seconds between updates for the display. Default: 5
Total duration: Time in seconds for the whole run. Default: 86400 (1 day)
Display histogram at the end: Flag that controls whether a histogram of the
whole collected data is displayed at the end of the run.

The output will look similar to top and/or powertop.
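For example, to watch disk activity with a 10 second refresh over a 5 minute run and get the histogram at the end, you could start it like this (the argument values are only illustrative):

  # diskdevstat 10 300 1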
Here a sample output of a longer diskdevstat run on a Fedora 10 system running KDE 4.2: PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 15494 0 sda1 0 0.000 0.000 0.000 758 0.000 0.012 0.000 0logwatch 15520 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 15549 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl 15585 0 sda1 0 0.000 0.000 0.000 108 0.001 0.002 0.000 perl 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 15429 0 sda1 0 0.000 0.000 0.000 62 0.009 0.009 0.000 crond 15379 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15473 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15415 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15433 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15425 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 15375 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15477 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 15469 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond 15419 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond 15481 0 sda1 0 0.000 0.000 0.000 61 0.000 0.001 0.000 crond 15355 0 sda1 0 0.000 0.000 0.000 37 0.000 0.014 0.001 laptop_mode 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd 15575 0 sda1 0 0.000 0.000 0.000 16 0.000 0.000 0.000 cat 15581 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 15582 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 15579 0 sda1 0 0.000 0.000 0.000 12 0.000 0.001 0.000 perl 15580 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 15354 0 sda1 0 0.000 0.000 0.000 12 0.000 0.170 0.014 sh 15584 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl 15548 0 sda1 0 0.000 0.000 0.000 12 0.001 0.014 0.001 perl 15577 0 sda1 0 0.000 0.000 0.000 12 0.001 0.003 0.000 perl 15519 0 sda1 0 0.000 0.000 0.000 12 0.001 0.005 0.000 perl 15578 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 15583 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl 15547 0 sda1 0 0.000 0.000 0.000 11 0.000 0.002 0.000 perl 15576 0 sda1 0 0.000 0.000 0.000 11 0.001 0.001 0.000 perl 15518 0 sda1 0 0.000 0.000 0.000 11 0.000 0.001 0.000 perl 15354 0 sda1 0 0.000 0.000 0.000 10 0.053 0.053 0.005 lm_lid.sh Here a quick explanation of each column: PID: Process ID of the application UID: User ID under which the applications is running DEV: Device on which the IO took place WRITE_CNT: Total number of write operations WRITE_MIN: Lowest time in seconds for 2 consecutive writes WRITE_MAX: Largest time in seconds for 2 consecutive writes WRITE_AVG: Average time in seconds for 2 consecutive writes READ_CNT: Total number of read operations READ_MIN: Lowest time in seconds for 2 consecutive reads READ_MAX: Largest time in seconds for 2 consecutive reads READ_AVG: Average time in seconds for 2 consecutive reads COMMAND: Name of the process In this example 3 very obvious applications stand out: PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND 2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma 2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd 2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd Those are the 3 applications that have a WRITE_CNT > 0, meaning they performed some form of write during the measurement. Of those, plasma was the worst offender by a large amount. 
For one in total number of writes and of course the average time between writes was also the lowest. This would be the best candidate to investigate if you're concerned about power inefficient applications. tuned-2.10.0/doc/TIPS.txt000066400000000000000000000043001331721725100147750ustar00rootroot00000000000000=== Simple user tips for improving power usage === * Use a properly dimensioned system for the job (no need for overpowered systems for simple Desktop use e.g.). * For servers consolidate services on fewer systems to maximize efficiency of each system. * For a server farm consolidating all physical machines on one bigger server and then using Virtualization. * Enforce turning off machines that are not used (e.g. company policy). * Try to establish a company culture that is Green "aware", including but not limited to the above point. * Unplug and/or turn off peripherals that aren't used (e.g. external USB devices, monitors, printers, scanners). * Turn off unused hardware already in BIOS. * Disable power hungry features. * Enable CPU scaling if supported for ondemand CPU governor. DO NOT use powersave governor, typically uses more power than ondemand (race to idle). * Put network card to 100 mbit/10 mbit: ** 10 mbit: ethtool -s eth0 advertise 0x002 ** 100 mbit: ethtool -s eth0 advertise 0x008 ** Doesn't work for every card * Put harddisk to spindown fast and full power saving: ** hdparm -S240 /dev/sda (20m idle to spindown) ** hdparm -B1 /dev/sda (Max powersave mode) * Make sure writes to hd don't wake it up too quickly: ** Set flushing to once per 5 minutes ** echo "3000" > /proc/sys/vm/dirty_writeback_centisecs ** Enable laptop mode ** echo "5" > /proc/sys/vm/laptop_mode * Use relatime for your / partition ** mount -o remount,relatime / * Enable USB autosuspend by adding the following to the kernel boot commandline: ** usbcore.autosuspend=5 * Screensaver needs to dpms off the screen, not just make colors black. To turn off monitor after 120s when X is running: ** xset dpms 0 0 120 === Simple programmer tips for improving power usage === * Avoid unnecessary work/computation * Use efficient algorithms * Wake up only when necessary/real work is pending * Do not actively poll in programs or use short regular timeouts, rather react to events * If you wake up, do everything at once (race to idle) and as fast as possible * Use large buffers to avoid frequent disk access. 
Write one large block at a time * Don't use [f]sync() if not necessary * Group timers across applications if possible (even systems) tuned-2.10.0/experiments/000077500000000000000000000000001331721725100152565ustar00rootroot00000000000000tuned-2.10.0/experiments/kwin-stop/000077500000000000000000000000001331721725100172115ustar00rootroot00000000000000tuned-2.10.0/experiments/kwin-stop/xlib-example.py000066400000000000000000000024171331721725100221560ustar00rootroot00000000000000#!/usr/bin/python -Es from __future__ import print_function import os import Xlib from Xlib import X, display, Xatom dpy = display.Display() def loop(): atoms = {} wm_active_window = dpy.get_atom('_NET_ACTIVE_WINDOW') screens = dpy.screen_count() for num in range(screens): screen = dpy.screen(num) screen.root.change_attributes(event_mask=X.PropertyChangeMask) while True: ev = dpy.next_event() if ev.type == X.PropertyNotify: if ev.atom == wm_active_window: data = ev.window.get_full_property(ev.atom, 0) id = int(data.value.tolist()[0]) hidden = [] showed = [] if id != 0: for num in range(screens): root = dpy.screen(num).root for win in root.get_full_property(dpy.get_atom('_NET_CLIENT_LIST'), 0).value.tolist(): window = dpy.create_resource_object('window', win) if window.get_full_property(dpy.get_atom('_NET_WM_STATE'), Xatom.WINDOW) == None: continue if dpy.get_atom("_NET_WM_STATE_HIDDEN") in window.get_full_property(dpy.get_atom('_NET_WM_STATE'), 0).value.tolist(): if not win in hidden: hidden.append(win) else: if not win in showed: showed.append(win) print("Showed:", showed) print("Minimized:", hidden) if __name__ == '__main__': loop() tuned-2.10.0/experiments/malloc_trim_ldp.c000066400000000000000000000046351331721725100205730ustar00rootroot00000000000000/* * malloc_trim_ldp: A ld-preload library that can be used to potentially * save memory (especially for long running larger apps). * * Copyright (C) 2008-2013 Red Hat, Inc. * Authors: Phil Knirsch * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. * * To compile: * gcc -Wall -fPIC -shared -lpthread -o malloc_trim_ldp.o malloc_trim_ldp.c * * To install: * For a single app: * LD_PRELOAD=./malloc_trim_ldp.o application * * Systemwide: * cp malloc_trim_ldp.o /lib * echo "/lib/malloc_trim_ldp.o" >> /etc/ld.so.preload * * How it works: * This ld-preload library simply redirects the glibc free() call to a new * one that simply has a static counter and every 10.000 free() calls will * call malloc_trim(0) which goes through the heap of an application and * basically releases pages that aren't in use anymore using madvise(). 
* */ #include #include #include #include #include #include #include #include #include static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER; static int malloc_trim_count=0; static void mymalloc_install (void); static void mymalloc_uninstall (void); static void (*old_free_hook) (void *, const void *); static void myfree(void *ptr, const void *caller) { pthread_mutex_lock(&mymutex); malloc_trim_count++; if(malloc_trim_count%10000 == 0) { malloc_trim(0); } mymalloc_uninstall(); free(ptr); mymalloc_install(); pthread_mutex_unlock(&mymutex); } static void mymalloc_install (void) { old_free_hook = __free_hook; __free_hook = myfree; } static void mymalloc_uninstall (void) { __free_hook = old_free_hook; } void (*__malloc_initialize_hook) (void) = mymalloc_install; tuned-2.10.0/experiments/powertop2tuned.py000077500000000000000000000251551331721725100206440ustar00rootroot00000000000000#!/usr/bin/python -Es # -*- coding: utf-8 -*- # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Jan Kaluža # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # from __future__ import print_function # exception handler for python 2/3 compatibility try: from builtins import chr except ImportError: pass import os import sys import tempfile import shutil import argparse import codecs from subprocess import * # exception handler for python 2/3 compatibility try: from html.parser import HTMLParser from html.entities import name2codepoint except ImportError: from HTMLParser import HTMLParser from htmlentitydefs import name2codepoint SCRIPT_SH = """#!/bin/sh . 
/usr/lib/tuned/functions start() { %s return 0 } stop() { %s return 0 } process $@ """ TUNED_CONF_PROLOG = "# Automatically generated by powertop2tuned tool\n\n" TUNED_CONF_INCLUDE = """[main] %s\n """ TUNED_CONF_EPILOG="""\n[powertop_script] type=script replace=1 script=script.sh """ class PowertopHTMLParser(HTMLParser): def __init__(self, enable_tunings): HTMLParser.__init__(self) self.inProperTable = False self.inScript = False self.intd = False self.lastStartTag = "" self.tdCounter = 0 self.lastDesc = "" self.data = "" self.currentScript = "" if enable_tunings: self.prefix = "" else: self.prefix = "#" self.plugins = {} def getParsedData(self): return self.data def getPlugins(self): return self.plugins def handle_starttag(self, tag, attrs): self.lastStartTag = tag if self.lastStartTag == "div" and dict(attrs).get("id") == "tuning": self.inProperTable = True if self.inProperTable and tag == "td": self.tdCounter += 1 self.intd = True def parse_command(self, command): prefix = "" command = command.strip() if command[0] == '#': prefix = "#" command = command[1:] if command.startswith("echo") and command.find("/proc/sys") != -1: splitted = command.split("'") value = splitted[1] path = splitted[3] path = path.replace("/proc/sys/", "").replace("/", ".") self.plugins.setdefault("sysctl", "[sysctl]\n") self.plugins["sysctl"] += "#%s\n%s%s=%s\n\n" % (self.lastDesc, prefix, path, value) # TODO: plugins/plugin_sysfs.py doesn't support this so far, it has to be implemented to # let it work properly. elif command.startswith("echo") and (command.find("'/sys/") != -1 or command.find("\"/sys/") != -1): splitted = command.split("'") value = splitted[1] path = splitted[3] if path in ("/sys/module/snd_hda_intel/parameters/power_save", "/sys/module/snd_ac97_codec/parameters/power_save"): self.plugins.setdefault("audio", "[audio]\n") self.plugins["audio"] += "#%s\n%stimeout=1\n" % (self.lastDesc, prefix) else: self.plugins.setdefault("sysfs", "[sysfs]\n") self.plugins["sysfs"] += "#%s\n%s%s=%s\n\n" % (self.lastDesc, prefix, path, value) elif command.startswith("ethtool -s ") and command.endswith("wol d;"): self.plugins.setdefault("net", "[net]\n") self.plugins["net"] += "#%s\n%swake_on_lan=0\n" % (self.lastDesc, prefix) else: return False return True def handle_endtag(self, tag): if self.inProperTable and tag == "table": self.inProperTable = False self.intd = False if tag == "tr": self.tdCounter = 0 self.intd = False if tag == "td": self.intd = False if self.inScript: #print self.currentScript self.inScript = False # Command is not handled, so just store it in the script if not self.parse_command(self.currentScript.split("\n")[-1]): self.data += self.currentScript + "\n\n" def handle_entityref(self, name): if self.inScript: self.currentScript += chr(name2codepoint[name]) def handle_data(self, data): prefix = self.prefix if self.inProperTable and self.intd and self.tdCounter == 1: self.lastDesc = data if self.lastDesc.lower().find("autosuspend") != -1 and (self.lastDesc.lower().find("keyboard") != -1 or self.lastDesc.lower().find("mouse") != -1): self.lastDesc += "\n# WARNING: For some devices, uncommenting this command can disable the device." 
prefix = "#" if self.intd and ((self.inProperTable and self.tdCounter == 2) or self.inScript): self.tdCounter = 0 if not self.inScript: self.currentScript += "\t# " + self.lastDesc + "\n" self.currentScript += "\t" + prefix + data.strip() self.inScript = True else: self.currentScript += data.strip() class PowertopProfile: BAD_PRIVS = 100 PARSING_ERROR = 101 BAD_SCRIPTSH = 102 def __init__(self, output, profile_name, name = ""): self.profile_name = profile_name self.name = name self.output = output def currentActiveProfile(self): proc = Popen(["tuned-adm", "active"], stdout=PIPE, \ universal_newlines = True) output = proc.communicate()[0] if output and output.find("Current active profile: ") == 0: return output[len("Current active profile: "):output.find("\n")] return None def checkPrivs(self): myuid = os.geteuid() if myuid != 0: print('Run this program as root', file=sys.stderr) return False return True def generateHTML(self): print("Running PowerTOP, please wait...") environment = os.environ.copy() environment["LC_ALL"] = "C" try: proc = Popen(["/usr/sbin/powertop", \ "--html=/tmp/powertop", "--time=1"], \ stdout=PIPE, stderr=PIPE, \ env=environment, \ universal_newlines = True) output = proc.communicate()[1] except (OSError, IOError): print('Unable to execute PowerTOP, is PowerTOP installed?', file=sys.stderr) return -2 if proc.returncode != 0: print('PowerTOP returned error code: %d' % proc.returncode, file=sys.stderr) return -2 prefix = "PowerTOP outputing using base filename " if output.find(prefix) == -1: return -1 name = output[output.find(prefix)+len(prefix):-1] #print "Parsed filename=", [name] return name def parseHTML(self, enable_tunings): f = None data = None parser = PowertopHTMLParser(enable_tunings) try: f = codecs.open(self.name, "r", "utf-8") data = f.read() except (OSError, IOError, UnicodeDecodeError): data = None if f is not None: f.close() if data is None: return "", "" parser.feed(data) return parser.getParsedData(), parser.getPlugins() def generateShellScript(self, data): print("Generating shell script", os.path.join(self.output, "script.sh")) f = None try: f = codecs.open(os.path.join(self.output, "script.sh"), "w", "utf-8") f.write(SCRIPT_SH % (data, "")) os.fchmod(f.fileno(), 0o755) f.close() except (OSError, IOError) as e: print("Error writing shell script: %s" % e, file=sys.stderr) if f is not None: f.close() return False return True def generateTunedConf(self, profile, plugins): print("Generating Tuned config file", os.path.join(self.output, "tuned.conf")) f = codecs.open(os.path.join(self.output, "tuned.conf"), "w", "utf-8") f.write(TUNED_CONF_PROLOG) if profile is not None: if self.profile_name == profile: print('New profile has same name as active profile, not including active profile (avoiding circular deps).', file=sys.stderr) else: f.write(TUNED_CONF_INCLUDE % ("include=" + profile)) for plugin in list(plugins.values()): f.write(plugin + "\n") f.write(TUNED_CONF_EPILOG) f.close() def generate(self, new_profile, merge_profile, enable_tunings): generated_html = False if len(self.name) == 0: generated_html = True if not self.checkPrivs(): return self.BAD_PRIVS name = self.generateHTML() if isinstance(name, int): return name self.name = name data, plugins = self.parseHTML(enable_tunings) if generated_html: os.unlink(self.name) if len(data) == 0 and len(plugins) == 0: print('Your Powertop version is incompatible (maybe too old) or the generated HTML output is malformed', file=sys.stderr) return self.PARSING_ERROR if new_profile is False: if merge_profile is 
None: profile = self.currentActiveProfile() else: profile = merge_profile else: profile = None if not os.path.exists(self.output): os.makedirs(self.output) if not self.generateShellScript(data): return self.BAD_SCRIPTSH self.generateTunedConf(profile, plugins) return 0 if __name__ == "__main__": parser = argparse.ArgumentParser(description='Creates Tuned profile from Powertop HTML output.') parser.add_argument('profile', metavar='profile_name', type=str, nargs='?', help='Name for the profile to be written.') parser.add_argument('-i', '--input', metavar='input_html', type=str, help='Path to Powertop HTML report. If not given, it is generated automatically.') parser.add_argument('-o', '--output', metavar='output_directory', type=str, help='Directory where the profile will be written, default is /etc/tuned/profile_name directory.') parser.add_argument('-n', '--new-profile', action='store_true', help='Creates new profile, otherwise it merges (include) your current profile.') parser.add_argument('-m', '--merge-profile', action = 'store', help = 'Merges (includes) the specified profile (can be suppressed by -n option).') parser.add_argument('-f', '--force', action='store_true', help='Overwrites the output directory if it already exists.') parser.add_argument('-e', '--enable', action='store_true', help='Enable all tunings (not recommended). Even with this enabled tunings known to be harmful (like USB_AUTOSUSPEND) won''t be enabled.') args = parser.parse_args() args = vars(args) if not args['profile'] and not args['output']: print('You have to specify the profile_name or output directory using the --output argument.', file=sys.stderr) parser.print_help() sys.exit(-1) if not args['output']: args['output'] = "/etc/tuned" if args['profile']: args['output'] = os.path.join(args['output'], args['profile']) if not args['input']: args['input'] = '' if os.path.exists(args['output']) and not args['force']: print('Output directory already exists, use --force to overwrite it.', file=sys.stderr) sys.exit(-1) p = PowertopProfile(args['output'], args['profile'], args['input']) sys.exit(p.generate(args['new_profile'], args['merge_profile'], args['enable'])) tuned-2.10.0/experiments/varcpuload.c000066400000000000000000000117541331721725100175720ustar00rootroot00000000000000/* * varcpuload: Simple tool to create reproducable artifical sustained load on * a machine. * * Copyright (C) 2008-2013 Red Hat, Inc. * Authors: Phil Knirsch * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. * * Usage: varcpuload [-t time] [-n numcpu] [LOAD | MINLOAD MAXLOAD INCREASE] * LOAD, MINLOAD and MAXLOAD need to be between 1 and 100. 
* * To compile: * gcc -Wall -Os varcpuload.c -o varcpuload -lpthread * * To measure load: * 1st terminal: * for i in `seq 1 2 100`; do ./varcpuload -t 55 -n `/usr/bin/getconf _NPROCESSORS_ONLN` $i; done * or better * ./varcpuload -t 60 -n `/usr/bin/getconf _NPROCESSORS_ONLN` 1 100 2; done * 2nd terminal: * rm -f results; for i in `seq 1 2 100`; do powertop -d -t 60 >> results; done * * make sure the machine is otherwise idle, so start the machine initlevel 3 or even 1 * and stop every unecessary service. * */ #include #include #include #include #include #include #include #include #include #define handle_error_en(en, msg) \ do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0) #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) #define ARRSIZE 512 int sleeptime = 0; int duration = 60; int load = 100; void usage() { fprintf(stderr, "Usage: varload [-t time] [-n numcpu] [LOAD | MINLOAD MAXLOAD INCREASE]\n"); fprintf(stderr, "LOAD, MINLOAD and MAXLOAD need to be between 1 and 100.\n"); } int worker(void) { int i, j; float array[ARRSIZE][ARRSIZE]; for (i = 0; i < ARRSIZE; i++) { for (j = 0; j < ARRSIZE; j++) { array[i][j] = (float)(i + j) / (float)(i + 1); } } return (int)array[1][1]; } int timeDiff(struct timeval *tv1, struct timeval *tv2) { return (tv2->tv_sec - tv1->tv_sec) * 1000000 + tv2->tv_usec - tv1->tv_usec; } int getWorkerTime() { int cnt, i; struct timeval tv1, tv2; cnt = 0; gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); // Warmup of 1 sec while (1000000 > timeDiff(&tv1, &tv2)) { i = worker(); usleep(1); gettimeofday(&tv2, NULL); } gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); // Meassure for 4 sec while (4*1000000 > timeDiff(&tv1, &tv2)) { i = worker(); usleep(0); gettimeofday(&tv2, NULL); cnt++; } return timeDiff(&tv1, &tv2)/cnt; } static void * runWorker(void *arg) { int i; struct timeval tv1, tv2; gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); while (duration > timeDiff(&tv1, &tv2)) { i = worker(); usleep(sleeptime); gettimeofday(&tv2, NULL); } return NULL; } int main(int argc, char *argv[]) { int wtime, numcpu, opt, s, i; int minload, maxload, loadinc; pthread_attr_t attr; pthread_t *tid; void *res; numcpu = 1; while ((opt = getopt(argc, argv, "t:n:")) != -1) { switch (opt) { case 't': duration = atoi(optarg); break; case 'n': numcpu = atoi(optarg); break; default: /* '?' */ usage(); exit(EXIT_FAILURE); } } loadinc = 1; switch (argc - optind) { case 0: minload = 100; maxload = 100; break; case 1: minload = atoi(argv[optind]); maxload = minload; break; case 3: minload = atoi(argv[optind]); maxload = atoi(argv[optind + 1]); loadinc = atoi(argv[optind + 2]); break; default: /* '?' */ usage(); exit(EXIT_FAILURE); } if (minload < 1 || maxload < 1 || minload > 100 || maxload > 100) { usage(); exit(EXIT_FAILURE); } wtime = getWorkerTime(); duration *= 1000000; for (load = minload; load <= maxload; load += loadinc) { sleeptime = wtime * 100 / load - wtime; printf("Starting %d sec run with\n", duration / 1000000); printf("Load: %d\n", load); printf("Worker time: %d\n", wtime); printf("Sleep time: %d\n", sleeptime); printf("Nr. 
of CPUs to run on: %d\n", numcpu); s = pthread_attr_init(&attr); if (s != 0) handle_error_en(s, "pthread_attr_init"); tid = malloc(sizeof(pthread_t) * numcpu); if (tid == NULL) handle_error("malloc"); for (i = 0; i image/svg+xml tuned-2.10.0/libexec/000077500000000000000000000000001331721725100143265ustar00rootroot00000000000000tuned-2.10.0/libexec/defirqaffinity.py000077500000000000000000000070051331721725100177110ustar00rootroot00000000000000#!/usr/bin/python # Helper script for realtime profiles provided by RT import os import sys irqpath = "/proc/irq/" def bitmasklist(line): fields = line.strip().split(",") bitmasklist = [] entry = 0 for i in range(len(fields) - 1, -1, -1): mask = int(fields[i], 16) while mask != 0: if mask & 1: bitmasklist.append(entry) mask >>= 1 entry += 1 return bitmasklist def get_cpumask(mask): groups = [] comma = 0 while mask: cpumaskstr = '' m = mask & 0xffffffff cpumaskstr += ('%x' % m) if comma: cpumaskstr += ',' comma = 1 mask >>= 32 groups.append(cpumaskstr) string = '' for i in reversed(groups): string += i return string def parse_def_affinity(fname): if os.getuid() != 0: return try: with open(fname, 'r') as f: line = f.readline() return bitmasklist(line) except IOError: return [ 0 ] def verify(shouldbemask): inplacemask = 0 fname = irqpath + "default_smp_affinity"; cpulist = parse_def_affinity(fname) for i in cpulist: inplacemask = inplacemask | 1 << i; if (inplacemask & ~shouldbemask): sys.stderr.write("verify: failed: irqaffinity (%s) inplacemask=%x shouldbemask=%x\n" % (fname, inplacemask, shouldbemask)) sys.exit(1) # now verify each /proc/irq/$num/smp_affinity interruptdirs = [ f for f in os.listdir(irqpath) if os.path.isdir(os.path.join(irqpath,f)) ] # IRQ 2 - cascaded signals from IRQs 8-15 (any devices configured to use IRQ 2 will actually be using IRQ 9) try: interruptdirs.remove("2") except ValueError: pass # IRQ 0 - system timer (cannot be changed) try: interruptdirs.remove("0") except ValueError: pass for i in interruptdirs: inplacemask = 0 fname = irqpath + i + "/smp_affinity" cpulist = parse_def_affinity(fname) for i in cpulist: inplacemask = inplacemask | 1 << i; if (inplacemask & ~shouldbemask): sys.stderr.write("verify: failed: irqaffinity (%s) inplacemask=%x shouldbemask=%x\n" % (fname, inplacemask, shouldbemask)) sys.exit(1) sys.exit(0) # adjust default_smp_affinity cpulist = parse_def_affinity(irqpath + "default_smp_affinity") mask = 0 for i in cpulist: mask = mask | 1 << i; if len(sys.argv) < 3 or len(str(sys.argv[2])) == 0: sys.stderr.write("%s: invalid arguments\n" % os.path.basename(sys.argv[0])) sys.exit(1) line = sys.argv[2] fields = line.strip().split(",") for i in fields: if sys.argv[1] == "add": mask = mask | 1 << int(i); elif sys.argv[1] == "remove" or sys.argv[1] == "verify": mask = mask & ~(1 << int(i)); if sys.argv[1] == "verify": verify(mask) string = get_cpumask(mask) fo = open(irqpath + "default_smp_affinity", "wb") fo.write(string) fo.close() # now adjust each /proc/irq/$num/smp_affinity interruptdirs = [ f for f in os.listdir(irqpath) if os.path.isdir(os.path.join(irqpath,f)) ] # IRQ 2 - cascaded signals from IRQs 8-15 (any devices configured to use IRQ 2 will actually be using IRQ 9) try: interruptdirs.remove("2") except ValueError: pass # IRQ 0 - system timer (cannot be changed) try: interruptdirs.remove("0") except ValueError: pass ret = 0 for i in interruptdirs: fname = irqpath + i + "/smp_affinity" cpulist = parse_def_affinity(fname) mask = 0 for j in cpulist: mask = mask | 1 << j; for j in fields: if sys.argv[1] 
== "add": mask = mask | 1 << int(j); elif sys.argv[1] == "remove": mask = mask & ~(1 << int(j)); string = get_cpumask(mask) try: fo = open(fname, "wb") fo.write(string) fo.close() except IOError as e: sys.stderr.write('Failed to set smp_affinity for IRQ %s: %s\n' % (str(i), str(e))) ret = 1 sys.exit(ret) tuned-2.10.0/libexec/pmqos-static.py000077500000000000000000000074031331721725100173330ustar00rootroot00000000000000#!/usr/bin/python # # pmqos-static.py: Simple daemon for setting static PM QoS values. It is a part # of 'tuned' and it should not be called manually. # # Copyright (C) 2011 Red Hat, Inc. # Authors: Jan Vcelak # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # from __future__ import print_function import os import signal import struct import sys import time # Used interface is described in Linux kernel documentation: # Documentation/power/pm_qos_interface.txt ALLOWED_INTERFACES = [ "cpu_dma_latency", "network_latency", "network_throughput" ] PIDFILE = "/var/run/tuned/pmqos-static.pid" def do_fork(): pid = os.fork() if pid > 0: sys.exit(0) def close_fds(): f = open('/dev/null', 'w+') os.dup2(f.fileno(), sys.stdin.fileno()) os.dup2(f.fileno(), sys.stdout.fileno()) os.dup2(f.fileno(), sys.stderr.fileno()) def write_pidfile(): with os.fdopen(os.open(PIDFILE, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o644), "w") as f: f.write("%d" % os.getpid()) def daemonize(): do_fork() os.chdir("/") os.setsid() os.umask(0) do_fork() close_fds() def set_pmqos(name, value): filename = "/dev/%s" % name bin_value = struct.pack("i", int(value)) try: fd = os.open(filename, os.O_WRONLY) except OSError: print("Cannot open (%s)." % filename, file=sys.stderr) return None os.write(fd, bin_value) return fd def sleep_forever(): while True: time.sleep(86400) def sigterm_handler(signum, frame): global pmqos_fds if type(pmqos_fds) is list: for fd in pmqos_fds: os.close(fd) sys.exit(0) def run_daemon(options): try: daemonize() write_pidfile() signal.signal(signal.SIGTERM, sigterm_handler) except Exception as e: print("Cannot daemonize (%s)." % e, file=sys.stderr) return False global pmqos_fds pmqos_fds = [] for (name, value) in list(options.items()): try: new_fd = set_pmqos(name, value) if new_fd is not None: pmqos_fds.append(new_fd) except: # we are daemonized pass if len(pmqos_fds) > 0: sleep_forever() else: return False def kill_daemon(force = False): try: with open(PIDFILE, "r") as pidfile: daemon_pid = int(pidfile.read()) except IOError as e: if not force: print("Cannot open PID file (%s)." % e, file=sys.stderr) return False try: os.kill(daemon_pid, signal.SIGTERM) except OSError as e: if not force: print("Cannot terminate the daemon (%s)." % e, file=sys.stderr) return False try: os.unlink(PIDFILE) except OSError as e: if not force: print("Cannot delete the PID file (%s)." 
% e, file=sys.stderr) return False return True if __name__ == "__main__": disable = False options = {} for option in sys.argv[1:]: if option == "disable": disable = True break try: (name, value) = option.split("=") except ValueError: name = option value = None if name in ALLOWED_INTERFACES and len(value) > 0: options[name] = value else: print("Invalid option (%s)." % option, file=sys.stderr) if disable: sys.exit(0 if kill_daemon() else 1) if len(options) == 0: print("No options set. Not starting.", file=sys.stderr) sys.exit(1) kill_daemon(True) run_daemon(options) sys.exit(1) tuned-2.10.0/man/000077500000000000000000000000001331721725100134665ustar00rootroot00000000000000tuned-2.10.0/man/diskdevstat.8000066400000000000000000000014771331721725100161150ustar00rootroot00000000000000.TH "diskdevstat" "8" "13 Jan 2011" "Phil Knirsch" "Tool for recording harddisk activity" .SH NAME diskdevstat - tool for recording harddisk activity .SH SYNOPSIS \fBdiskdevstat\fP [\fIupdate interval\fP] [\fItotal duration\fP] [\fIdisplay histogram\fP] .SH DESCRIPTION \fBdiskdevstat\fR is a simple systemtap script to record harddisk activity of processes and display statistics for read/write operations. .TP update interval Sets sampling interval in seconds. .TP total duration Sets total measurement time in seconds. .TP display histogram If this parameter is present, the histogram will be shown at the end of the measurement. .SH "SEE ALSO" .LP tuned(8) netdevstat(8) varnetload(8) scomes(8) stap(1) .SH AUTHOR Written by Phil Knirsch . .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. tuned-2.10.0/man/netdevstat.8000066400000000000000000000014771331721725100157510ustar00rootroot00000000000000.TH "netdevstat" "8" "13 Jan 2011" "Phil Knirsch" "Tool for recording network activity" .SH NAME netdevstat - tool for recording network activity .SH SYNOPSIS \fBnetdevstat\fP [\fIupdate interval\fP] [\fItotal duration\fP] [\fIdisplay histogram\fP] .SH DESCRIPTION \fBnetdevstat\fR is a simple systemtap script to record network activity of processes and display statistics for transmit/receive operations. .TP update interval Sets sampling interval in seconds. .TP total duration Sets total measurement time in seconds. .TP display histogram If this parameter is present, the histogram will be shown at the end of the measurement. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) varnetload(8) scomes(8) stap(1) .SH AUTHOR Written by Phil Knirsch . .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. tuned-2.10.0/man/scomes.8000066400000000000000000000015571331721725100150600ustar00rootroot00000000000000.TH "scomes" "8" "13 Jan 2011" "Phil Knirsch" "Tool for watching system resources" .SH NAME scomes - tool for watching system resources .SH SYNOPSIS \fBscomes\fP \-c "binary [binary arguments ...]" [timer] .SH DESCRIPTION \fBscomes\fR is a simple systemtap script for watching activity of one process. Syscalls count, userspace and kernelspace ticks, read and written bytes, transmitted and received bytes and polling syscalls are measured. .TP binary Binary file to be executed. This process will be watched. .TP timer Setting this option causes the script to print out statistic every N seconds. If not provided, the statistics are printed only when watched process terminates. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) netdevstat(8) varnetload(8) stap(1) .SH AUTHOR Written by Jan Hutař . .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 
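.SH EXAMPLE
The following invocation is only an illustration; the watched command (dd) and the 5 second timer are arbitrary choices. It runs dd under scomes and prints the collected statistics every 5 seconds until dd terminates.
.nf
scomes \-c "dd if=/dev/zero of=/dev/null bs=1M count=10000" 5
.fi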
tuned-2.10.0/man/tuned-adm.8000066400000000000000000000076751331721725100154540ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_ADM "8" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-adm - command line tool for switching between different tuning profiles .SH SYNOPSIS .B tuned\-adm .RB [ list " | " active " | " "profile \fI[profile]\fP" " | " off " | " verify " | " recommend ] .SH DESCRIPTION This command line utility allows you to switch between user definable tuning profiles. Several predefined profiles are already included. You can even create your own profile, either based on one of the existing ones by copying it or make a completely new one. The distribution provided profiles are stored in subdirectories below \fI/usr/lib/tuned\fP and the user defined profiles in subdirectories below \fI/etc/tuned\fP. If there are profiles with the same name in both places, user defined profiles have precedence. .SH "OPTIONS" .SS .TP .B list List all available profiles. .TP .B active Show current active profile. .TP .BI "profile " [PROFILE_NAME] Switches to the given profile. If none is given then all available profiles are listed. If the profile given is not valid the command gracefully exits without performing any operation. .TP .B verify Verifies current profile against system settings. Outputs information whether system settings match current profile or not (e.g. somebody modified a sysfs/sysctl value by hand). Detailed information about what is checked, what value is set and what value is expected can be found in the log. .TP .B recommend Recommend a profile suitable for your system. Currently only static detection is implemented - it decides according to data in \fI/etc/system\-release\-cpe\fP and virt\-what output. The rules for autodetection are defined in the file \fI/usr/lib/tuned/recommend.d/50-tuned.conf\fP. The default rules recommend profiles targeted to the best performance, or the balanced profile if unsure. The default rules can be overridden by the user by putting a file named \fIrecommend.conf\fP into /etc/tuned, or by creating a file in the \fI/etc/tuned/recommend.d\fP directory. The file \fI/etc/tuned/recommend.conf\fP is evaluated first. If no match is found, the files in the \fI/etc/tuned/recommend.d\fP directory are merged with the files in the \fI/usr/lib/tuned/recommend.d\fP directory (if there is a file with the same name in both directories, the one from \fI/etc/tuned/recommend.d\fP is used) and the files are evaluated in alphabetical order. The first matching entry is used. .TP .B off Unload tunings. 
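.SH EXAMPLE
The following session is only an illustration; the profile name used (\fBpowersave\fP) is an example and the set of available profiles depends on the installed packages. It lists the available profiles, switches to one of them, checks and verifies the result and finally unloads the tuning.
.nf
tuned\-adm list
tuned\-adm profile powersave
tuned\-adm active
tuned\-adm verify
tuned\-adm off
.fi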
.SH "FILES" .nf /etc/tuned/* /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-conf (5) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada Jan Kaluža Jan Včelák Marcela Mašláňová Phil Knirsch tuned-2.10.0/man/tuned-main.conf.5000066400000000000000000000101641331721725100165430ustar00rootroot00000000000000.TH "tuned-main.conf" "5" "15 Oct 2013" "Jaroslav Škarvada" "tuned-main.conf file format description" .SH NAME tuned\-main.conf - Tuned global configuration file .SH SYNOPSIS .B /etc/tuned/tuned\-main.conf .SH DESCRIPTION This man page documents format of the Tuned global configuration file. The \fItuned\-main.conf\fR file uses the ini\-file format. .TP .BI daemon= BOOL This defines whether Tuned will use daemon or not. It is boolean value. It can be \fBTrue\fR or \fB1\fR if the daemon is enabled and \fBFalse\fR or \fB0\fR if disabled. It is not recommended to disable daemon, because many functions will not work without daemon, e.g. there will be no D-Bus, no settings rollback, no hotplug support, no dynamic tuning, ... .TP .BI dynamic_tuning= BOOL This defines whether the dynamic tuning is enabled. It is boolean value. It can be \fBTrue\fR or \fB1\fR if the dynamic tuning is enabled and \fBFalse\fR or \fB0\fR if disabled. In such case only the static tuning will be used. Please note if it is enabled here, it is still possible to individually disable it in plugins. It is only applicable if \fBdaemon\fR is enabled. .TP .BI sleep_interval= INT Tuned daemon is periodically waken after \fIINT\fR seconds and checks for events. By default this is set to 1 second. If you have Python 2 interpreter with applied patch from Red Hat Bugzilla #917709 this controls responsiveness time of Tuned to commands (i.e. if you request profile switch, it may take up to 1 second until Tuned reacts). Increase this number for higher responsiveness times and more power savings (due to lower number of wakeups). In case you have unpatched Python 2 interpreter, this settings will have no visible effect, because the interpreter will poll 20 times per second. It is only applicable if \fBdaemon\fR is enabled. .TP .BI update_interval= INT Update interval for dynamic tuning (in seconds). Tuned daemon is periodically waken after \fIINT\fR seconds, updates its monitors, calculates new tuning parameters for enabled plugins and applies the changes. Plugins that have disabled dynamic tuning are not processed. By default the \fIINT\fR is set to 10 seconds. Tuned daemon doesn't periodically wake if dynamic tuning is globally disabled (see \fBdynamic_tuning\fR) or this setting set to 0. This must be multiple of \fBsleep_interval\fR. It is only applicable if \fBdaemon\fR is enabled. .TP .BI recommend_command= BOOL This controls whether recommend functionality will be enabled or not. It is boolean value. It can be \fBTrue\fR or \fB1\fR if the recommend command is enabled and \fBFalse\fR or \fB0\fR if disabled. If disabled \fBrecommend\fR command will be not available in CLI, tuned will not parse \fIrecommend.conf\fR and will return one hardcoded profile (by default \fBbalanced\fR). It is only applicable if \fBdaemon\fR is enabled. By default it's set to \fBTrue\fR. 
.TP .BI reapply_sysctl= BOOL This controls whether to reapply sysctl settings from the \fI/etc/sysctl.conf\fR, \fI/etc/sysctl.d/*.conf\fR, \fI/usr/lib/sysctl.d/*.conf\fR, \fI/usr/local/lib/sysctl.d/*.conf\fR, \fI/lib/sysctl.d/*.conf\fR, \fI/run/sysctl.d/*.conf\fR, i.e. all locations supported by \fBsysctl --system\fR after Tuned sysctl settings are applied, i.e. if set to \fBTrue\fR or \fB1\fR Tuned sysctl settings will not override system sysctl settings. If set to \fBFalse\fR or \fB0\fR Tuned sysctl settings will override system sysctl settings. By default it's set to \fBTrue\fR. .TP .BI default_instance_priority= INT Default instance (unit) priority. By default it's \fB0\fR. Each unit has a priority which is by default preset to the \fIINT\fR. It can be overridden in the Tuned profile by the \fBpriority\fR option. Tuned units are processed in order defined by their priorities, i.e. unit with the lowest number is processed as the first. .SH EXAMPLE .nf no_daemon = 0 dynamic_tuning = 1 sleep_interval = 1 update_interval = 10 recommend_command = 0 reapply_sysctl = 1 default_instance_priority = 0 .fi .SH FILES .I /etc/tuned/tuned\-main.conf .SH "SEE ALSO" .LP tuned(8) .SH AUTHOR Written by Jaroslav Škarvada . .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. tuned-2.10.0/man/tuned-profiles-atomic.7000066400000000000000000000042131331721725100177700ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2014-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_ATOMIC "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-atomic - description of profiles provided for the Project Atomic .SH DESCRIPTION These profiles are provided for the Project Atomic. They provides performance optimizations for the Atomic hosts (bare metal) and virtual guests. .SH PROFILES The following profiles are provided: .TP .BI "atomic\-host" Profile optimized for Atomic hosts (bare metal). It is based on throughput\-performance profile. It additionally increases SELinux AVC cache, PID limit and tunes netfilter connections tracking. .TP .BI "atomic\-guest" Profile optimized for virtual Atomic guests. It is based on virtual\-guest profile. It additionally increases SELinux AVC cache, PID limit and tunes netfilter connections tracking. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-compat.7000066400000000000000000000070741331721725100200070ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák, .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_COMPAT "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-compat - description of profiles provided for backward compatibility .SH DESCRIPTION These profiles are provided for backward compatibility with the tuned-1x. They are no longer maintained and may be dropped anytime in the future. It's not recommended to use them for purposes other than backward compatibility. Some of them are only aliases to base tuned profiles. Please do not use the compat profiles on new installations and rather use profiles from base tuned package or other tuned subpackages. .SH PROFILES The following profiles are considered compatibility profiles: .TP .BI "default" It is the lowest of the available profiles in regard to power saving and only enables CPU and disk plugins of tuned. .TP .BI "desktop\-powersave" A power saving profile directed at desktop systems. Enables ALPM power saving for SATA host adapters as well as the CPU, ethernet and disk plugins of tuned. .TP .BI server\-powersave A power saving profile directed at server systems. Enables ALPM power saving for SATA host adapters, and activates the CPU and disk plugins of tuned. .TP .BI laptop\-ac\-powersave Medium power saving profile directed at laptops running on AC. Enables ALPM power saving for SATA host adapters, WiFi power saving as well as CPU, ethernet and disk plugins of tuned. .TP .BI laptop\-battery\-powersave Strong power saving profile directed at laptops running on battery. Currently an alias to powersave profile. .TP .BI "spindown\-disk" Strong power saving profile directed at machines with classic HDDs. It enables aggressive disk spin-down. Disk writeback values are increased and disk swappiness is lowered. Log syncing is disabled. All partitions are remounted with 'noatime' option. .TP .BI "enterprise\-storage" Server profile for high disk throughput performance tuning. Disables power saving mechanisms and enables hugepages. Disk readahead values are increased. 
CPU governor is set to performance. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .SH AUTHOR .nf Jaroslav Škarvada Jan Kaluža Jan Včelák Marcela Mašláňová Phil Knirsch tuned-2.10.0/man/tuned-profiles-cpu-partitioning.7000066400000000000000000000105011331721725100220060ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Luiz Capitulino .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_CPU_PARTITIONING "7" "22 Feb 2018" "tuned" .SH NAME tuned\-profiles\-cpu\-partitioning - Partition CPUs into isolated and housekeeping. .SH DESCRIPTION The cpu\-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. This profile is intended to be used for latency\-sensitive workloads. An isolated CPU incurs reduced jitter and reduced interruptions by the kernel. This is achieved by clearing the CPU of user\-space processes, movable kernel threads, interrupt handlers, kernel timers, etc. The only fixed source of interruptions is the 1Hz tick maintained by the kernel to keep CPU usage statistics. Otherwise, the incurred jitter and interruptions, if any, depend on the kernel services used by the thread running on the isolated CPU. Threads that run a busy loop without doing system calls, such as user\-space drivers that access the hardware directly, are only expected to be interrupted once a second by the 1Hz tick. A housekeeping CPU is the opposite of an isolated CPU. Housekeeping CPUs run all daemons, shell processes, kernel threads, interrupt handlers and work that can be dispatched from isolated CPUs such as disk I/O, RCU work, timers, etc. .SH CONFIGURATION The cpu-partitioning profile is configured by editing the .I /etc/tuned/cpu-partitioning-variables.conf file. There are two configuration options: .TP .B isolated_cores= List of CPUs to isolate. This option is mandatory. Any CPU not in this list is automatically considered a housekeeping CPU. .TP .B no_balance_cores= List of CPUs not to be considered by the kernel when doing system\-wide process load\-balancing. Usually, this list should be the same as isolated_cores=. This option is optional. See the example below.
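.P
As an illustration only (the CPU numbers are hypothetical and depend on the machine), on a 16 CPU system that should keep CPUs 0 and 1 for housekeeping and isolate the rest, \fIcpu\-partitioning\-variables.conf\fP could contain:
.nf
isolated_cores=2-15
no_balance_cores=2-15
.fi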
.SH IMPORTANT NOTES .IP * 2 The system should be rebooted after applying the cpu\-partitioning profile for the first time or after changing its configuration. .IP * The cpu\-partitioning profile can be used in bare\-metal and virtual machines. .IP * The cpu\-partitioning profile does not use the kernel's isolcpus= feature. .IP * On a NUMA system, it is recommended to have at least one housekeeping CPU per NUMA node. .IP * The cpu\-partitioning profile does not support isolating the L3 cache. This means that a housekeeping CPU can still thrash cache entries pertaining to isolated CPUs. It is recommended to use cache isolation technologies to remedy this problem, such as Intel's Cache Allocation Technology. .IP * Whether or not the kernel is able to deactivate the tick on isolated CPUs depends on a few factors concerning the behavior of the running thread. Please consult the nohz_full documentation in the kernel to learn more. .IP * The Linux real\-time project has put together a document on the best practices for writing real\-time applications. Even though the cpu\-partitioning profile does not guarantee real\-time response time, many of the techniques for writing real\-time applications also apply to applications intended to run under the cpu\-partitioning profile. Please refer to this document at .I https://rt.wiki.kernel.org .SH "FILES" .nf .I /etc/tuned/cpu\-partitioning\-variables.conf .I /etc/tuned/tuned\-main.conf .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .SH AUTHOR .nf Jaroslav Škarvada Luiz Capitulino Andrew Theurer tuned-2.10.0/man/tuned-profiles-mssql.7000066400000000000000000000031521331721725100176540ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2018 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_MSSQL "7" "05 Jun 2018" "Red Hat, Inc." "tuned" .SH NAME tuned\-profiles\-mssql - description of profile provided for the MS SQL Server .SH DESCRIPTION This profile is provided for the MS SQL Server. It is based on the throughput\-performance profile.
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-nfv-guest.7000066400000000000000000000033751331721725100204420ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_NFV_GUEST "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-nfv\-guest - description of profile provided for the NFV guest .SH DESCRIPTION The profile is provided for the Network Function Virtualization (NFV) guest. .SH PROFILES The following profile is provided: .TP .BI "realtime\-virtual\-guest" Profile optimized for virtual guests based on realtime profile. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-nfv-host.7000066400000000000000000000033701331721725100202630ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES_NFV_HOST "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-nfv\-host - description of profile provided for the NFV host .SH DESCRIPTION The profile is provided for the Network Function Virtualization (NFV) host. .SH PROFILES The following profile is provided: .TP .BI "realtime\-virtual\-host" Profile optimized for virtual hosts based on realtime profile. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-oracle.7000066400000000000000000000035151331721725100177650ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_ORACLE "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-oracle - description of profiles provided for the Oracle .SH DESCRIPTION These profiles are provided for the Oracle loads. .SH PROFILES The following profiles are provided: .TP .BI "oracle" Profile optimized for Oracle databases based on throughput\-performance profile. It additionally disables transparent huge pages and modifies some other performance related kernel parameters. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-realtime.7000066400000000000000000000032571331721725100203250ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_REALTIME "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-realtime - description of profiles provided for the realtime .SH DESCRIPTION These profiles are provided for the realtime. .SH PROFILES The following profiles are provided: .TP .BI "realtime" Profile optimized for realtime. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-sap-hana.7000066400000000000000000000044341331721725100202110ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_SAP_HANA "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-sap\-hana - description of profiles provided for the SAP HANA .SH DESCRIPTION These profiles provides performance optimizations for the SAP HANA applications. .SH PROFILES The following profiles are provided: .TP .BI "sap\-hana" A performance optimized profile for the SAP HANA applications. It is based on throughput\-performance profile. It additionally disables transparent hugepages, locks CPU to the low C states (by PM QoS) and tunes sysctl regarding semaphores. .TP .BI "sap\-hana\-vmware" A performance optimized profile for the SAP HANA applications on VMware. It is based on throughput\-performance profile. It additionally disables transparent hugepages, locks CPU to the low C states (by PM QoS) and tunes sysctl regarding semaphores. It decreases virtual memory swappiness and disables large receive offload for the network adapter. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles-sap.7000066400000000000000000000036661331721725100173120ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_SAP "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles\-sap - description of profiles provided for the SAP NetWeaver .SH DESCRIPTION These profiles provides performance optimizations for the SAP NetWeaver applications. .SH PROFILES The following profiles are provided: .TP .BI "sap\-netweaver" A performance optimized profile for the SAP NetWeaver applications. It is based on throughput\-performance profile. It additionally tunes sysctl settings regarding shared memory, semaphores and maximum number of memory map areas a process may have. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada tuned-2.10.0/man/tuned-profiles.7000066400000000000000000000132371331721725100165240ustar00rootroot00000000000000.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák, .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. 
.\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES "7" "30 Mar 2017" "Fedora Power Management SIG" "tuned" .SH NAME tuned\-profiles - description of basic tuned profiles .SH DESCRIPTION These are the base profiles which are mostly shipped in the base tuned package. They are targeted to various goals. Mostly they provide performance optimizations but there are also profiles targeted to low power consumption, low latency and others. You can mostly deduce the purpose of the profile by its name or you can see full description below. The profiles are stored in subdirectories below \fI/usr/lib/tuned\fP. If you need to customize the profiles, you can copy them to \fI/etc/tuned\fP and modify them as you need. When loading profiles with the same name, the /etc/tuned takes precedence. In such case you will not lose your customized profiles between tuned updates. The power saving profiles contain settings that are typically not enabled by default as they will noticeably impact the latency/performance of your system as opposed to the power saving mechanisms that are enabled by default. On the other hand the performance profiles disable the additional power saving mechanisms of tuned as they would negatively impact throughput or latency. .SH PROFILES At the moment we're providing the following pre\-defined profiles: .TP .BI "balanced" It is the default profile. It provides balanced power saving and performance. At the moment it enables CPU and disk plugins of tuned and it makes sure the conservative governor is active (if supported by the current cpufreq driver). It enables ALPM power saving for SATA host adapters and sets the link power management policy to medium_power. It also sets the CPU energy performance bias to normal. It also enables AC97 audio power saving or (it depends on your system) HDA\-Intel power savings with 10 seconds timeout. In case your system contains supported Radeon graphics card (with enabled KMS) it configures it to automatic power saving. .TP .BI "powersave" Maximal power saving, at the moment it enables USB autosuspend (in case environment variable USB_AUTOSUSPEND is set to 1), enables ALPM power saving for SATA host adapters and sets the link power management policy to min_power. It also enables WiFi power saving and makes sure the ondemand governor is active (if supported by the current cpufreq driver). It sets the CPU energy performance bias to powersave. It also enables AC97 audio power saving or (it depends on your system) HDA\-Intel power savings (with 10 seconds timeout). In case your system contains supported Radeon graphics card (with enabled KMS) it configures it to automatic power saving. On Asus Eee PCs dynamic Super Hybrid Engine is enabled. .TP .BI "throughput\-performance" Profile for typical throughput performance tuning. Disables power saving mechanisms and enables sysctl settings that improve the throughput performance of your disk and network IO. CPU governor is set to performance and CPU energy performance bias is set to performance. Disk readahead values are increased. .TP .BI "latency\-performance" Profile for low latency performance tuning. Disables power saving mechanisms. CPU governor is set to performance and locked to the low C states (by PM QoS). CPU energy performance bias to performance. 
.TP .BI "network\-throughput" Profile for throughput network tuning. It is based on the throughput\-performance profile. It additionally increases kernel network buffers. .TP .BI "network\-latency" Profile for low latency network tuning. It is based on the latency\-performance profile. It additionally disables transparent hugepages, NUMA balancing and tunes several other network related sysctl parameters. .TP .BI "desktop" Profile optimized for desktops based on balanced profile. It additionally enables scheduler autogroups for better response of interactive applications. .TP .BI "virtual\-guest" Profile optimized for virtual guests based on throughput\-performance profile. It additionally decreases virtual memory swappiness and increases dirty_ratio settings. .TP .BI "virtual\-host" Profile optimized for virtual hosts based on throughput\-performance profile. It additionally enables more aggressive writeback of dirty pages. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .SH AUTHOR .nf Jaroslav Škarvada Jan Kaluža Jan Včelák Marcela Mašláňová Phil Knirsch tuned-2.10.0/man/tuned.8000066400000000000000000000050641331721725100147030ustar00rootroot00000000000000.\"/*. .\" * All rights reserved .\" * Copyright (C) 2009-2013 Red Hat, Inc. .\" * Authors: Jan Kaluža, Jan Včelák, Jaroslav Škarvada, .\" * Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\". .TH "tuned" "8" "28 Mar 2012" "Fedora Power Management SIG" "Adaptive system tuning daemon" .SH NAME tuned - dynamic adaptive system tuning daemon .SH SYNOPSIS \fBtuned\fP [\fIoptions\fP] .SH DESCRIPTION \fBtuned\fR is a dynamic adaptive system tuning daemon that tunes system settings dynamically depending on usage. .SH OPTIONS .TP 12 .BI \-d "\fR, \fP" \-\-daemon This options starts \fBtuned\fP as a daemon as opposed to in the foreground without forking at startup. .TP 12 .BI \-D "\fR, \fP" \-\-debug Sets the highest logging level. This could be very useful when having trouble with \fBtuned\fP. .TP 12 .BI \-h "\fR, \fP" \-\-help Show this help. .TP 12 .BI \-l " \fR[" \fILOG "\fR], " \fB\-\-log \fR[ \fB=\fILOG\fR]\fP Log to the file \fILOG\fP. If no \fILOG\fP file is specified \fB/var/log/tuned/tuned.log\fP is used. .TP 12 .BI \--no-dbus Do not attach to DBus. .TP 12 .BI \-P " \fR[" \fIPID "\fR], " \fB\-\-pid \fR[ \fB=\fIPID\fR]\fP Write process ID to the \fBPID\fP file. If no \fIPID\fP file is specified \fB/run/tuned/tuned.pid\fP is used. 
.TP 12 .BI \-p "\fR \fP" \fIPROFILE\fP "\fR, \fP" \-\-profile "\fR \fP" \fIPROFILE\fP Tuning profile to be activated. It will override other settings (e.g. from \fBtuned-adm\fP). This is intended for debugging purposes. .TP 12 .BI \-v "\fR, \fP" \-\-version Show version information. .SH "FILES" .nf /etc/tuned /usr/share/doc/tuned/README .SH "SEE ALSO" .LP tuned.conf(5) tuned\-adm(8) .SH AUTHOR .nf Jan Kaluža Jan Včelák Jaroslav Škarvada Phil Knirsch .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. tuned-2.10.0/man/tuned.conf.5000066400000000000000000000054751331721725100156300ustar00rootroot00000000000000.TH "tuned.conf" "5" "13 Mar 2012" "Jan Kaluža" "tuned.conf file format description" .SH NAME tuned.conf - Tuned profile definition .SH DESCRIPTION This man page documents the format of Tuned 2.0 profile definition files. The profile definition is stored in the /etc/tuned/<profile_name>/tuned.conf or /usr/lib/tuned/<profile_name>/tuned.conf file, where the /etc/tuned/ directory has higher priority. The \fBtuned.conf\fR file configures the profile and is in ini-file format. .SH MAIN SECTION The main section is called "[main]" and can contain the following options: .TP include= Includes a profile with the given name. This allows you to base a new profile on an already existing profile. In case there are conflicting parameters in the new profile and the base profile, the parameters from the new profile are used. .SH PLUGINS Every other section defines one plugin. The name of the section is used as the name of the plugin and is used in logs to identify it. There can be only one plugin of a particular type tuning a particular device. Conflicts are by default resolved by merging the options of both plugins together. This can be changed by the "replace" option. Every plugin section can contain the following options: .TP type= Plugin type. Currently there are the following upstream plugins: audio, bootloader, cpu, disk, eeepc_she, modules, mounts, net, script, scsi_host, selinux, scheduler, sysctl, sysfs, systemd, usb, video, vm. This list may be incomplete. If you installed tuned through RPM, you can list the upstream plugins with the following command: .B rpm -ql tuned | grep 'plugins/plugin_.*.py$' Check the plugins directory returned by this command to see all plugins (e.g. plugins provided by 3rd party packages). .TP devices= Comma separated list of devices which should be tuned by this plugin instance. If you omit this option, all found devices will be tuned. .TP replace=1 If there is a conflict between two plugins (meaning two plugins of the same type are trying to configure the same devices), then the plugin defined last replaces all options defined by the previously defined plugin. .LP Plugins can also have plugin-related options. .SH "EXAMPLE" .nf [main] # Includes plugins defined in "included" profile. include=included # Define my_sysctl plugin [my_sysctl] type=sysctl # This plugin will replace any sysctl plugin defined in "included" profile replace=1 # 256 KB default performs well experimentally. net.core.rmem_default = 262144 net.core.wmem_default = 262144 # Define my_script plugin # Both scripts (profile.sh from this profile and script from "included" # profile) will be run, because if there is no "replace=1" option the # default action is merge. [my_script] type=script script=profile.sh .fi .SH "SEE ALSO" .LP tuned(8) .SH AUTHOR Written by Jan Kaluža. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/.
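As a further illustration of the profile format documented above, the following is a minimal custom-profile sketch; it is not shipped with tuned, and the profile name my-throughput and the chosen swappiness value are hypothetical. Stored as /etc/tuned/my-throughput/tuned.conf, it builds on the shipped throughput-performance profile via include and overrides a single sysctl value:

[main]
# Inherit all tuning from the shipped throughput-performance profile
include=throughput-performance

[sysctl]
# Override one value from the parent profile
vm.swappiness=5

Profiles kept under /etc/tuned survive tuned package updates and take precedence over same-named profiles under /usr/lib/tuned; a profile like the sketch above could be activated with 'tuned-adm profile my-throughput'.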
tuned-2.10.0/man/varnetload.8000066400000000000000000000021741331721725100157220ustar00rootroot00000000000000.TH "varnetload" "8" "13 Jan 2011" "Phil Knirsch" "Tool to create reproducible network traffic" .SH NAME varnetload - tool to create reproducible network traffic .SH SYNOPSIS \fBvarnetload\fP [\fI\-d delay\fP] [\fI\-t time to run\fP] [\fI\-u url\fP] .SH DESCRIPTION \fBvarnetload\fR is a simple python script to create reproducible sustained network traffic. In order to use it effectively, you need to have an HTTP server present in your local LAN where you can put files. Upload a large HTML (or any other kind of file) to the HTTP server. Use the -u option of the script to point to that URL. Play with the delay option to vary the load put on your network. .TP delay Sets delay between individual downloads in milliseconds. Default value is 1000. But you my find values range from 0 to 500 more useful. .TP time to run Sets total run time in seconds. Default value is 60. .TP url Sets downloaded resource. Default is http://myhost.mydomain/index.html. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) netdevstat(8) scomes(8) .SH AUTHOR Written by Phil Knirsch . .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. tuned-2.10.0/modules.conf000066400000000000000000000012421331721725100152310ustar00rootroot00000000000000# This file specifies additional parameters to kernel modules added by Tuned. # Its content is set by the Tuned modules plugin. # # Please do not edit this file. Content of this file can be overwritten by # switch of Tuned profile. # # If you need to add kernel module parameter which should be handled by Tuned, # create Tuned profile containing the following: # # [modules] # MODULE_NAME = MODULE_PARAMETERS # # Then switch to your newly created profile by: # # tuned-adm profile YOUR_NEW_PROFILE # # and reboot or reload the module # # Tuned tries to automatically reload the module if specified the following # way: # # [modules] # MODULE_NAME = +r,MODULE_PARAMETERS # tuned-2.10.0/profiles/000077500000000000000000000000001331721725100145365ustar00rootroot00000000000000tuned-2.10.0/profiles/atomic-guest/000077500000000000000000000000001331721725100171375ustar00rootroot00000000000000tuned-2.10.0/profiles/atomic-guest/tuned.conf000066400000000000000000000004461331721725100211310ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize virtual guests based on the Atomic variant include=virtual-guest [selinux] avc_cache_threshold=65536 [net] nf_conntrack_hashsize=131072 [sysctl] kernel.pid_max=131072 net.netfilter.nf_conntrack_max=1048576 fs.inotify.max_user_watches=65536 tuned-2.10.0/profiles/atomic-host/000077500000000000000000000000001331721725100167655ustar00rootroot00000000000000tuned-2.10.0/profiles/atomic-host/tuned.conf000066400000000000000000000004621331721725100207550ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize bare metal systems running the Atomic variant include=throughput-performance [selinux] avc_cache_threshold=65536 [net] nf_conntrack_hashsize=131072 [sysctl] kernel.pid_max=131072 net.netfilter.nf_conntrack_max=1048576 fs.inotify.max_user_watches=65536 tuned-2.10.0/profiles/balanced/000077500000000000000000000000001331721725100162675ustar00rootroot00000000000000tuned-2.10.0/profiles/balanced/tuned.conf000066400000000000000000000004741331721725100202620ustar00rootroot00000000000000# # tuned configuration # [main] summary=General non-specialized tuned profile [cpu] governor=conservative energy_perf_bias=normal [audio] 
timeout=10 [video] radeon_powersave=dpm-balanced, auto [disk] # Comma separated list of devices, all devices if commented out. # devices=sda [scsi_host] alpm=medium_power tuned-2.10.0/profiles/cpu-partitioning/000077500000000000000000000000001331721725100200325ustar00rootroot00000000000000tuned-2.10.0/profiles/cpu-partitioning/00-tuned-pre-udev.sh000077500000000000000000000005671331721725100234620ustar00rootroot00000000000000#!/bin/sh type getargs >/dev/null 2>&1 || . /lib/dracut-lib.sh cpumask="$(getargs tuned.non_isolcpus)" file=/sys/devices/virtual/workqueue/cpumask log() { echo "tuned: $@" >> /dev/kmsg } if [ -n "$cpumask" ]; then log "setting workqueue CPU mask to $cpumask" if ! echo $cpumask > $file 2>/dev/null; then log "ERROR: could not write workqueue CPU mask" fi fi tuned-2.10.0/profiles/cpu-partitioning/cpu-partitioning-variables.conf000066400000000000000000000002241331721725100261410ustar00rootroot00000000000000# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # To disable the kernel load balancing in certain isolated CPUs: # no_balance_cores=5-10 tuned-2.10.0/profiles/cpu-partitioning/script.sh000077500000000000000000000027751331721725100217100ustar00rootroot00000000000000#!/bin/sh . /usr/lib/tuned/functions no_balance_cpus_file=$STORAGE/no-balance-cpus.txt change_sd_balance_bit() { local set_bit=$1 local flags_cur= local file= local cpu= for cpu in $(cat $no_balance_cpus_file); do for file in $(find /proc/sys/kernel/sched_domain/cpu$cpu -name flags -print); do flags_cur=$(cat $file) if [ $set_bit -eq 1 ]; then flags_cur=$((flags_cur | 0x1)) else flags_cur=$((flags_cur & 0xfffe)) fi echo $flags_cur > $file done done } disable_balance_domains() { change_sd_balance_bit 0 } enable_balance_domains() { change_sd_balance_bit 1 } start() { mkdir -p "${TUNED_tmpdir}/etc/systemd" mkdir -p "${TUNED_tmpdir}/usr/lib/dracut/hooks/pre-udev" cp /etc/systemd/system.conf "${TUNED_tmpdir}/etc/systemd/" cp 00-tuned-pre-udev.sh "${TUNED_tmpdir}/usr/lib/dracut/hooks/pre-udev/" sed -i '/^IRQBALANCE_BANNED_CPUS=/d' /etc/sysconfig/irqbalance echo "IRQBALANCE_BANNED_CPUS=$TUNED_isolated_cpumask" >>/etc/sysconfig/irqbalance setup_kvm_mod_low_latency disable_ksm echo "$TUNED_no_balance_cores_expanded" | sed 's/,/ /g' > $no_balance_cpus_file disable_balance_domains return "$?" } stop() { if [ "$1" = "full_rollback" ] then sed -i '/^IRQBALANCE_BANNED_CPUS=/d' /etc/sysconfig/irqbalance teardown_kvm_mod_low_latency fi enable_ksm enable_balance_domains return "$?" 
} process $@ tuned-2.10.0/profiles/cpu-partitioning/tuned.conf000066400000000000000000000037601331721725100220260ustar00rootroot00000000000000# tuned configuration # [main] summary=Optimize for CPU partitioning include=network-latency [variables] # User is responsible for updating variables.conf with variable content such as isolated_cores=X-Y include=/etc/tuned/cpu-partitioning-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} # tmpdir tmpdir=${f:strip:${f:exec:/usr/bin/mktemp:-d}} # Non-isolated cores cpumask including offline cores isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cpumask=${f:cpulist2hex:${isolated_cores_expanded}} not_isolated_cores_expanded=${f:cpulist_invert:${isolated_cores_expanded}} isolated_cores_present_expanded=${f:cpulist_present:${isolated_cores}} not_isolated_cores_present_expanded=${f:cpulist_present:${not_isolated_cores_expanded}} not_isolated_cpumask=${f:cpulist2hex:${not_isolated_cores_expanded}} no_balance_cores_expanded=${f:cpulist_unpack:${no_balance_cores}} # Fail if isolated_cores contains CPUs which are not present assert2=${f:assertion:isolated_cores contains present CPU(s):${isolated_cores_expanded}:${isolated_cores_present_expanded}} [sysctl] kernel.hung_task_timeout_secs = 600 kernel.nmi_watchdog = 0 vm.stat_interval = 10 kernel.timer_migration = 1 [sysfs] /sys/bus/workqueue/devices/writeback/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/cpumask = ${not_isolated_cpumask} /sys/devices/system/machinecheck/machinecheck*/ignore_ce = 1 [systemd] cpu_affinity=${not_isolated_cores_expanded} [script] priority=5 script=${i:PROFILE_DIR}/script.sh [scheduler] isolated_cores=${isolated_cores} ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.* [bootloader] priority=10 initrd_remove_dir=True initrd_dst_img=tuned-initrd.img initrd_add_dir=${tmpdir} cmdline_cpu_part=+nohz=on nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} tuned.non_isolcpus=${not_isolated_cpumask} intel_pstate=disable nosoftlockup tuned-2.10.0/profiles/default/000077500000000000000000000000001331721725100161625ustar00rootroot00000000000000tuned-2.10.0/profiles/default/tuned.conf000066400000000000000000000002451331721725100201510ustar00rootroot00000000000000# # tuned configuration # [main] summary=Legacy default tuned profile [cpu] [disk] # Comma separated list of devices, all devices if commented out. # devices=sda tuned-2.10.0/profiles/desktop-powersave/000077500000000000000000000000001331721725100202205ustar00rootroot00000000000000tuned-2.10.0/profiles/desktop-powersave/tuned.conf000066400000000000000000000003711331721725100222070ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optmize for the desktop use-case with power saving include=server-powersave [video] radeon_powersave=dpm-battery, auto [net] # Comma separated list of devices, all devices if commented out. 
# devices=eth0 tuned-2.10.0/profiles/desktop/000077500000000000000000000000001331721725100162075ustar00rootroot00000000000000tuned-2.10.0/profiles/desktop/tuned.conf000066400000000000000000000002101331721725100201660ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for the desktop use-case include=balanced [sysctl] kernel.sched_autogroup_enabled=1 tuned-2.10.0/profiles/enterprise-storage/000077500000000000000000000000001331721725100203605ustar00rootroot00000000000000tuned-2.10.0/profiles/enterprise-storage/tuned.conf000066400000000000000000000002301331721725100223410ustar00rootroot00000000000000# # tuned configuration # [main] summary=Legacy profile for RHEL6, for RHEL7, please use throughput-performance profile include=throughput-performance tuned-2.10.0/profiles/functions000066400000000000000000000332551331721725100165010ustar00rootroot00000000000000# # This is library of helper functions that can be used in scripts in tuned profiles. # # API provided by this library is under heavy development and could be changed anytime # # # Config # STORAGE=/run/tuned STORAGE_PERSISTENT=/var/lib/tuned STORAGE_SUFFIX=".save" # # Helpers # # Save value # $0 STORAGE_NAME VALUE save_value() { [ "$#" -ne 2 ] && return [ "$2" -a -e "${STORAGE}" ] && echo "$2" > "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Parse sysfs value, i.e. for "val1 [val2] val3" return "val2" # $0 SYSFS_NAME parse_sys() { local V1 V2 [ -r "$1" ] || return V1=`cat "$1"` V2="${V1##*[}" V2="${V2%%]*}" echo "${V2:-$V1}" } # Save sysfs value # $0 STORAGE_NAME SYSFS_NAME save_sys() { [ "$#" -ne 2 ] && return [ -r "$2" -a ! -e "${STORAGE}/${1}${STORAGE_SUFFIX}" ] && parse_sys "$2" > "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Set sysfs value # $0 SYSFS_NAME VALUE set_sys() { [ "$#" -ne 2 ] && return [ -w "$1" ] && echo "$2" > "$1" } # Save and set sysfs value # $0 STORAGE_NAME SYSFS_NAME VALUE save_set_sys() { [ "$#" -ne 3 ] && return save_sys "$1" "$2" set_sys "$2" "$3" } # Get stored sysfs value from storage # $0 STORAGE_NAME get_stored_sys() { [ "$#" -ne 1 ] && return [ -r "${STORAGE}/${1}${STORAGE_SUFFIX}" ] && cat "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Restore value from storage # $0 STORAGE_NAME restore_value() { [ "$#" -ne 1 ] && return _rs_value="`get_stored_sys \"$1\"`" unlink "${STORAGE}/${1}${STORAGE_SUFFIX}" >/dev/null 2>&1 [ "$_rs_value" ] && echo "$_rs_value" } # Restore sysfs value from storage, if nothing is stored, use VALUE # $0 STORAGE_NAME SYSFS_NAME [VALUE] restore_sys() { [ "$#" -lt 2 -o "$#" -gt 3 ] && return _rs_value="`get_stored_sys \"$1\"`" unlink "${STORAGE}/${1}${STORAGE_SUFFIX}" >/dev/null 2>&1 [ "$_rs_value" ] || _rs_value="$3" [ "$_rs_value" ] && set_sys "$2" "$_rs_value" } # # DISK tuning # DISKS_DEV="$(command ls -d1 /dev/[shv]d*[a-z] 2>/dev/null)" DISKS_SYS="$(command ls -d1 /sys/block/{sd,cciss,dm-,vd,dasd,xvd}* 2>/dev/null)" _check_elevator_override() { /bin/fgrep -q 'elevator=' /proc/cmdline } # $0 OPERATOR DEVICES ELEVATOR _set_elevator_helper() { _check_elevator_override && return SYS_BLOCK_SDX="" [ "$2" ] && SYS_BLOCK_SDX=$(eval LANG=C /bin/ls -1 "${2}" 2>/dev/null) # if there is no kernel command line elevator settings, apply the elevator if [ "$1" -a "$SYS_BLOCK_SDX" ]; then for i in $SYS_BLOCK_SDX; do se_dev="`echo \"$i\" | sed 's|/sys/block/\([^/]\+\)/queue/scheduler|\1|'`" $1 "elevator_${se_dev}" "$i" "$3" done fi } # $0 DEVICES ELEVATOR set_elevator() { _set_elevator_helper save_set_sys "$1" "$2" } # $0 DEVICES [ELEVATOR] restore_elevator() { re_elevator="$2" [ "$re_elevator" 
] || re_elevator=cfq _set_elevator_helper restore_sys "$1" "$re_elevator" } # SATA Aggressive Link Power Management # usage: set_disk_alpm policy set_disk_alpm() { policy=$1 for host in /sys/class/scsi_host/*; do if [ -f $host/ahci_port_cmd ]; then port_cmd=`cat $host/ahci_port_cmd`; if [ $((0x$port_cmd & 0x240000)) = 0 -a -f $host/link_power_management_policy ]; then echo $policy >$host/link_power_management_policy; else echo "max_performance" >$host/link_power_management_policy; fi fi done } # usage: set_disk_apm level set_disk_apm() { level=$1 for disk in $DISKS_DEV; do hdparm -B $level $disk &>/dev/null done } # usage: set_disk_spindown level set_disk_spindown() { level=$1 for disk in $DISKS_DEV; do hdparm -S $level $disk &>/dev/null done } # usage: multiply_disk_readahead by multiply_disk_readahead() { by=$1 # float multiplication not supported in bash # bc might not be installed, python is available for sure for disk in $DISKS_SYS; do control="${disk}/queue/read_ahead_kb" old=$(cat $control) new=$(echo "print int($old*$by)" | python) (echo $new > $control) &>/dev/null done } # usage: remount_disk options partition1 partition2 ... remount_partitions() { options=$1 shift for partition in $@; do mount -o remount,$options $partition >/dev/null 2>&1 done } remount_all_no_rootboot_partitions() { [ "$1" ] || return # Find non-root and non-boot partitions, disable barriers on them rootvol=$(df -h / | grep "^/dev" | awk '{print $1}') bootvol=$(df -h /boot | grep "^/dev" | awk '{print $1}') volumes=$(df -hl --exclude=tmpfs | grep "^/dev" | awk '{print $1}') nobarriervols=$(echo "$volumes" | grep -v $rootvol | grep -v $bootvol) remount_partitions "$1" $nobarriervols } DISK_QUANTUM_SAVE="${STORAGE}/disk_quantum${STORAGE_SUFFIX}" set_disk_scheduler_quantum() { value=$1 rm -f "$DISK_QUANTUM_SAVE" for disk in $DISKS_SYS; do control="${disk}/queue/iosched/quantum" echo "echo $(cat $control) > $control" >> "$DISK_QUANTUM_SAVE" 2>/dev/null (echo $value > $control) &2>/dev/null done } restore_disk_scheduler_quantum() { if [ -r "$DISK_QUANTUM_SAVE" ]; then /bin/sh "$DISK_QUANTUM_SAVE" &>/dev/null rm -f "$DISK_QUANTUM_SAVE" fi } # # CPU tuning # CPUSPEED_SAVE_FILE="${STORAGE}/cpuspeed${STORAGE_SUFFIX}" CPUSPEED_ORIG_GOV="${STORAGE}/cpuspeed-governor-%s${STORAGE_SUFFIX}" CPUSPEED_STARTED="${STORAGE}/cpuspeed-started" CPUSPEED_CFG="/etc/sysconfig/cpuspeed" CPUSPEED_INIT="/etc/init.d/cpuspeed" # do not use cpuspeed CPUSPEED_USE="0" CPUS="$(ls -d1 /sys/devices/system/cpu/cpu* | sed 's;^.*/;;' | grep "cpu[0-9]\+")" # set CPU governor setting and store the old settings # usage: set_cpu_governor governor set_cpu_governor() { governor=$1 # always patch cpuspeed configuration if exists, if it doesn't exist and is enabled, # explictly disable it with hint if [ -e $CPUSPEED_INIT ]; then if [ ! -e $CPUSPEED_SAVE_FILE -a -e $CPUSPEED_CFG ]; then cp -p $CPUSPEED_CFG $CPUSPEED_SAVE_FILE sed -e 's/^GOVERNOR=.*/GOVERNOR='$governor'/g' $CPUSPEED_SAVE_FILE > $CPUSPEED_CFG fi else if [ "$CPUSPEED_USE" = "1" ]; then echo >&2 echo "Suggestion: install 'cpuspeed' package to get best tuning results." >&2 echo "Falling back to sysfs control." >&2 echo >&2 fi CPUSPEED_USE="0" fi if [ "$CPUSPEED_USE" = "1" ]; then service cpuspeed status &> /dev/null [ $? 
-eq 3 ] && touch $CPUSPEED_STARTED || rm -f $CPUSPEED_STARTED service cpuspeed restart &> /dev/null # direct change using sysfs elif [ -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then for cpu in $CPUS; do gov_file=/sys/devices/system/cpu/$cpu/cpufreq/scaling_governor save_file=$(printf $CPUSPEED_ORIG_GOV $cpu) rm -f $save_file if [ -e $gov_file ]; then cat $gov_file > $save_file echo $governor > $gov_file fi done fi } # re-enable previous CPU governor settings # usage: restore_cpu_governor restore_cpu_governor() { if [ -e $CPUSPEED_INIT ]; then if [ -e $CPUSPEED_SAVE_FILE ]; then cp -fp $CPUSPEED_SAVE_FILE $CPUSPEED_CFG rm -f $CPUSPEED_SAVE_FILE fi if [ "$CPUSPEED_USE" = "1" ]; then if [ -e $CPUSPEED_STARTED ]; then service cpuspeed stop &> /dev/null else service cpuspeed restart &> /dev/null fi fi if [ -e $CPUSPEED_STARTED ]; then rm -f $CPUSPEED_STARTED fi else CPUSPEED_USE="0" fi if [ "$CPUSPEED_USE" != "1" -a -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then for cpu in $CPUS; do cpufreq_dir=/sys/devices/system/cpu/$cpu/cpufreq save_file=$(printf $CPUSPEED_ORIG_GOV $cpu) if [ -e $cpufreq_dir/scaling_governor ]; then if [ -e $save_file ]; then cat $save_file > $cpufreq_dir/scaling_governor rm -f $save_file else echo userspace > $cpufreq_dir/scaling_governor cat $cpufreq_dir/cpuinfo_max_freq > $cpufreq_dir/scaling_setspeed fi fi done fi } _cpu_multicore_powersave() { value=$1 [ -e /sys/devices/system/cpu/sched_mc_power_savings ] && echo $value > /sys/devices/system/cpu/sched_mc_power_savings } # enable multi core power savings for low wakeup systems enable_cpu_multicore_powersave() { _cpu_multicore_powersave 1 } disable_cpu_multicore_powersave() { _cpu_multicore_powersave 0 } # # MEMORY tuning # THP_ENABLE="/sys/kernel/mm/transparent_hugepage/enabled" THP_SAVE="${STORAGE}/thp${STORAGE_SUFFIX}" [ -e "$THP_ENABLE" ] || THP_ENABLE="/sys/kernel/mm/redhat_transparent_hugepage/enabled" enable_transparent_hugepages() { if [ -e $THP_ENABLE ]; then cut -f2 -d'[' $THP_ENABLE | cut -f1 -d']' > $THP_SAVE (echo always > $THP_ENABLE) &> /dev/null fi } restore_transparent_hugepages() { if [ -e $THP_SAVE ]; then (echo $(cat $THP_SAVE) > $THP_ENABLE) &> /dev/null rm -f $THP_SAVE fi } # # WIFI tuning # # usage: _wifi_set_power_level level _wifi_set_power_level() { # 0 auto, PM enabled # 1-5 least savings and lowest latency - most savings and highest latency # 6 disable power savings level=$1 # apply the settings using iwpriv ifaces=$(cat /proc/net/wireless | grep -v '|' | sed 's@^ *\([^:]*\):.*@\1@') for iface in $ifaces; do iwpriv $iface set_power $level done # some adapters may relay on sysfs for i in /sys/bus/pci/devices/*/power_level; do (echo $level > $i) &> /dev/null done } enable_wifi_powersave() { _wifi_set_power_level 5 } disable_wifi_powersave() { _wifi_set_power_level 0 } # # BLUETOOTH tuning # disable_bluetooth() { hciconfig hci0 down >/dev/null 2>&1 lsmod | grep -q btusb && rmmod btusb } enable_bluetooth() { modprobe btusb hciconfig hci0 up >/dev/null 2>&1 } # # USB tuning # _usb_autosuspend() { value=$1 for i in /sys/bus/usb/devices/*/power/autosuspend; do echo $value > $i; done &> /dev/null } enable_usb_autosuspend() { _usb_autosuspend 1 } disable_usb_autosuspend() { _usb_autosuspend 0 } # # SOUND CARDS tuning # enable_snd_ac97_powersave() { save_set_sys ac97 /sys/module/snd_ac97_codec/parameters/power_save Y } disable_snd_ac97_powersave() { save_set_sys ac97 /sys/module/snd_ac97_codec/parameters/power_save N } restore_snd_ac97_powersave() { restore_sys ac97 
/sys/module/snd_ac97_codec/parameters/power_save $1 } set_hda_intel_powersave() { save_set_sys hda_intel /sys/module/snd_hda_intel/parameters/power_save $1 } restore_hda_intel_powersave() { restore_sys hda_intel /sys/module/snd_hda_intel/parameters/power_save $1 } # # VIDEO CARDS tuning # # Power savings settings for Radeon # usage: set_radeon_powersave dynpm | default | low | mid | high set_radeon_powersave () { [ "$1" ] || return [ -e /sys/class/drm/card0/device/power_method ] || return if [ "$1" = default -o "$1" = auto -o "$1" = low -o "$1" = med -o "$1" = high ]; then [ -w /sys/class/drm/card0/device/power_profile ] || return save_sys radeon_profile /sys/class/drm/card0/device/power_profile save_set_sys radeon_method /sys/class/drm/card0/device/power_method profile set_sys /sys/class/drm/card0/device/power_profile "$1" elif [ "$1" = dynpm ]; then save_sys radeon_profile /sys/class/drm/card0/device/power_profile save_set_sys radeon_method /sys/class/drm/card0/device/power_method dynpm fi } restore_radeon_powersave () { restore_sys radeon_method /sys/class/drm/card0/device/power_method profile _rrp_method="`get_stored_sys radeon_method`" [ -z "$_rrp_method" -o _rrp_method="profile" ] && restore_sys radeon_profile /sys/class/drm/card0/device/power_profile default } # # SOFTWARE tuning # RSYSLOG_CFG="/etc/rsyslog.conf" RSYSLOG_SAVE="${STORAGE}/cpuspeed${STORAGE_SUFFIX}" disable_logs_syncing() { cp -p $RSYSLOG_CFG $RSYSLOG_SAVE sed -i 's/ \/var\/log/-\/var\/log/' $RSYSLOG_CFG } restore_logs_syncing() { mv -Z $RSYSLOG_SAVE $RSYSLOG_CFG || mv $RSYSLOG_SAVE $RSYSLOG_CFG } # # HARDWARE SPECIFIC tuning # # Asus EEE with Intel Atom _eee_fsb_control() { value=$1 if [ -e /sys/devices/platform/eeepc/she ]; then echo $value > /sys/devices/platform/eeepc/she elif [ -e /sys/devices/platform/eeepc/cpufv ]; then echo $value > /sys/devices/platform/eeepc/cpufv elif [ -e /sys/devices/platform/eeepc-wmi/cpufv ]; then echo $value > /sys/devices/platform/eeepc-wmi/cpufv fi } eee_set_reduced_fsb() { _eee_fsb_control 2 } eee_set_normal_fsb() { _eee_fsb_control 1 } # # modprobe configuration handling # kvm_modprobe_file=/etc/modprobe.d/kvm.rt.tuned.conf setup_kvm_mod_low_latency() { if [ -f $kvm_modprobe_file ]; then return fi modinfo -p kvm | grep -q kvmclock_periodic_sync if [ "$?" -eq 0 ]; then echo "options kvm kvmclock_periodic_sync=0" > $kvm_modprobe_file fi modinfo -p kvm_intel | grep -q ple_gap if [ "$?" -eq 0 ]; then echo "options kvm_intel ple_gap=0" >> $kvm_modprobe_file fi } teardown_kvm_mod_low_latency() { rm -f $kvm_modprobe_file } # # KSM # KSM_SERVICES="ksm ksmtuned" KSM_RUN_PATH=/sys/kernel/mm/ksm/run disable_ksm() { for s in $KSM_SERVICES; do if systemctl is-enabled -q $s; then systemctl -q disable $s fi if systemctl is-active -q $s; then systemctl -q stop $s fi done if [ -f $KSM_RUN_PATH ]; then # Unmerge all shared pages echo 2 > $KSM_RUN_PATH fi } enable_ksm() { for s in $KSM_SERVICES; do systemctl -q preset $s # Only start the service if it's enabled by defaut if systemctl is-enabled -q $s; then systemctl start $s fi done } die() { echo "$@" >&2 exit 1 } # # ACTION PROCESSING # error_not_implemented() { echo "tuned: script function '$1' is not implemented." >&2 } # implicit actions, will be used if not provided by profile script: # # * start must be implemented # * stop must be implemented start() { error_not_implemented start return 16 } stop() { error_not_implemented stop return 16 } # # main processing # process() { ARG="$1" shift case "$ARG" in start) start "$@" RETVAL=$? 
;; stop) stop "$@" RETVAL=$? ;; verify) if declare -f verify &> /dev/null; then verify "$@" else : fi RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|verify}" RETVAL=2 ;; esac exit $RETVAL } tuned-2.10.0/profiles/laptop-ac-powersave/000077500000000000000000000000001331721725100204275ustar00rootroot00000000000000tuned-2.10.0/profiles/laptop-ac-powersave/script.sh000077500000000000000000000002321331721725100222670ustar00rootroot00000000000000#!/bin/sh . /usr/lib/tuned/functions start() { enable_wifi_powersave return 0 } stop() { disable_wifi_powersave return 0 } process $@ tuned-2.10.0/profiles/laptop-ac-powersave/tuned.conf000066400000000000000000000002271331721725100224160ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for laptop with power savings include=desktop-powersave [script] script=${i:PROFILE_DIR}/script.sh tuned-2.10.0/profiles/laptop-battery-powersave/000077500000000000000000000000001331721725100215165ustar00rootroot00000000000000tuned-2.10.0/profiles/laptop-battery-powersave/tuned.conf000066400000000000000000000001661331721725100235070ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize laptop profile with more aggressive power saving include=powersave tuned-2.10.0/profiles/latency-performance/000077500000000000000000000000001331721725100204745ustar00rootroot00000000000000tuned-2.10.0/profiles/latency-performance/tuned.conf000066400000000000000000000030161331721725100224620ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for deterministic performance at the cost of increased power consumption [cpu] force_latency=1 governor=performance energy_perf_bias=performance min_perf_pct=100 [sysctl] # ktune sysctl settings for rhel6 servers, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio=10 # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio=3 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk. # 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=10 # The total time the scheduler will consider a migrated process # "cache hot" and thus less likely to be re-migrated # (system default is 500000, i.e. 
0.5 ms) kernel.sched_migration_cost_ns=5000000 tuned-2.10.0/profiles/mssql/000077500000000000000000000000001331721725100156755ustar00rootroot00000000000000tuned-2.10.0/profiles/mssql/tuned.conf000066400000000000000000000002061331721725100176610ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for MS SQL Server include=throughput-performance [sysctl] vm.max_map_count=262144 tuned-2.10.0/profiles/network-latency/000077500000000000000000000000001331721725100176645ustar00rootroot00000000000000tuned-2.10.0/profiles/network-latency/tuned.conf000066400000000000000000000005561331721725100216600ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance include=latency-performance [vm] transparent_hugepages=never [sysctl] net.core.busy_read=50 net.core.busy_poll=50 net.ipv4.tcp_fastopen=3 kernel.numa_balancing=0 [bootloader] cmdline=skew_tick=1 tuned-2.10.0/profiles/network-throughput/000077500000000000000000000000001331721725100204365ustar00rootroot00000000000000tuned-2.10.0/profiles/network-throughput/tuned.conf000066400000000000000000000010361331721725100224240ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks include=throughput-performance [sysctl] # Increase kernel buffer size maximums. Currently this seems only necessary at 40Gb speeds. # # The buffer tuning values below do not account for any potential hugepage allocation. # Ensure that you do not oversubscribe system memory. net.ipv4.tcp_rmem="4096 87380 16777216" net.ipv4.tcp_wmem="4096 16384 16777216" net.ipv4.udp_mem="3145728 4194304 16777216" tuned-2.10.0/profiles/oracle/000077500000000000000000000000001331721725100160035ustar00rootroot00000000000000tuned-2.10.0/profiles/oracle/tuned.conf000066400000000000000000000011531331721725100177710ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for Oracle RDBMS include=throughput-performance [sysctl] vm.swappiness = 10 vm.dirty_background_ratio = 3 vm.dirty_ratio = 40 vm.dirty_expire_centisecs = 500 vm.dirty_writeback_centisecs = 100 kernel.shmmax = 4398046511104 kernel.shmall = 1073741824 kernel.shmmni = 4096 kernel.sem = 250 32000 100 128 fs.file-max = 6815744 fs.aio-max-nr = 1048576 net.ipv4.ip_local_port_range = 9000 65499 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 kernel.panic_on_oops = 1 [vm] transparent_hugepages=never tuned-2.10.0/profiles/powersave/000077500000000000000000000000001331721725100165515ustar00rootroot00000000000000tuned-2.10.0/profiles/powersave/script.sh000077500000000000000000000004151331721725100204140ustar00rootroot00000000000000#!/bin/sh . /usr/lib/tuned/functions start() { [ "$USB_AUTOSUSPEND" = 1 ] && enable_usb_autosuspend enable_wifi_powersave return 0 } stop() { [ "$USB_AUTOSUSPEND" = 1 ] && disable_usb_autosuspend disable_wifi_powersave return 0 } process $@ tuned-2.10.0/profiles/powersave/tuned.conf000066400000000000000000000010431331721725100205350ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for low power consumption [cpu] governor=ondemand energy_perf_bias=powersave|power [eeepc_she] [vm] [audio] timeout=10 [video] radeon_powersave=dpm-battery, auto [disk] # Comma separated list of devices, all devices if commented out. 
# devices=sda [net] # Comma separated list of devices, all devices if commented out. # devices=eth0 [scsi_host] alpm=min_power [sysctl] vm.laptop_mode=5 vm.dirty_writeback_centisecs=1500 kernel.nmi_watchdog=0 [script] script=${i:PROFILE_DIR}/script.sh tuned-2.10.0/profiles/realtime-virtual-guest/000077500000000000000000000000001331721725100211515ustar00rootroot00000000000000tuned-2.10.0/profiles/realtime-virtual-guest/realtime-virtual-guest-variables.conf000066400000000000000000000000731331721725100304010ustar00rootroot00000000000000# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # tuned-2.10.0/profiles/realtime-virtual-guest/tuned.conf000066400000000000000000000027401331721725100231420ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for realtime workloads running within a KVM guest include=realtime [variables] # User is responsible for adding isolated_cores=X-Y to realtime-virtual-guest-variables.conf include=/etc/tuned/realtime-virtual-guest-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cores_present_expanded=${f:cpulist_present:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not present assert2=${f:assertion:isolated_cores contains present CPU(s):${isolated_cores_expanded}:${isolated_cores_present_expanded}} [scheduler] # group.group_name=rule_priority:scheduler_policy:scheduler_priority:core_affinity_in_hex:process_name_regex # for i in `pgrep ksoftirqd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ksoftirqd=0:f:2:*:ksoftirqd.* # for i in `pgrep rcuc` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcuc=0:f:4:*:rcuc.* # for i in `pgrep rcub` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcub=0:f:4:*:rcub.* # for i in `pgrep ktimersoftd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ktimersoftd=0:f:3:*:ktimersoftd.* ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.* [bootloader] cmdline_rvg=+nohz=on nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} tuned-2.10.0/profiles/realtime-virtual-host/000077500000000000000000000000001331721725100207775ustar00rootroot00000000000000tuned-2.10.0/profiles/realtime-virtual-host/find-lapictscdeadline-optimal.sh000077500000000000000000000012561331721725100272130ustar00rootroot00000000000000#!/bin/bash : ${1?"Usage: $0 latency-file"} lines=`wc -l $1 | cut -f 1 -d " "` in_range=0 prev_value=1 for i in `seq 1 $lines`; do a=`awk "NR==$i" $1 | cut -f 2 -d ":"` value=$(($a*100/$prev_value)) if [ $value -ge 98 -a $value -le 102 ]; then in_range=$(($in_range + 1)) else in_range=0 fi if [ $in_range -ge 2 ]; then echo -n "optimal value for lapic_timer_advance_ns is: " awk "NR==$(($i - 1))" $1 | cut -f 1 -d ":" exit 0 fi prev_value=$a done # if still decreasing, then use highest ns value if [ $value -le 99 ]; then echo -n "optimal value for lapic_timer_advance_ns is: " awk "NR==$(($i - 1))" $1 | cut -f 1 -d ":" exit 0 fi echo optimal not found exit 1 tuned-2.10.0/profiles/realtime-virtual-host/realtime-virtual-host-variables.conf000066400000000000000000000000731331721725100300550ustar00rootroot00000000000000# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # tuned-2.10.0/profiles/realtime-virtual-host/script.sh000077500000000000000000000061001331721725100226370ustar00rootroot00000000000000#!/bin/sh . 
/usr/lib/tuned/functions CACHE_VALUE_FILE=./lapic_timer_adv_ns CACHE_CPU_FILE=./lapic_timer_adv_ns.cpumodel KVM_LAPIC_FILE=/sys/module/kvm/parameters/lapic_timer_advance_ns QEMU=$(type -P qemu-kvm || echo /usr/libexec/qemu-kvm) TSCDEADLINE_LATENCY="/usr/share/qemu-kvm/tscdeadline_latency.flat" if [ ! -f "$TSCDEADLINE_LATENCY" ]; then TSCDEADLINE_LATENCY="/usr/share/tuned/tscdeadline_latency.flat" fi run_tsc_deadline_latency() { dir=`mktemp -d` for i in `seq 1000 500 7000`; do echo $i > $KVM_LAPIC_FILE chrt -f 1 taskset -c $1 $QEMU -enable-kvm -device pc-testdev \ -device isa-debug-exit,iobase=0xf4,iosize=0x4 \ -display none -serial stdio -device pci-testdev \ -kernel "$TSCDEADLINE_LATENCY" \ -cpu host | grep latency | cut -f 2 -d ":" > $dir/out if [ ! -f $dir/out ]; then die running $TSCDEADLINE_LATENCY failed fi tmp=$(wc -l $dir/out | awk '{ print $1 }') if [ $tmp -eq 0 ]; then die running $TSCDEADLINE_LATENCY failed fi A=0 while read l; do A=$(($A+$l)) done < $dir/out lines=`wc -l $dir/out | cut -f 1 -d " "` ans=$(($A/$lines)) echo $i: $ans done } start() { setup_kvm_mod_low_latency disable_ksm # If CPU model has changed, clean the cache if [ -f $CACHE_CPU_FILE ]; then curmodel=`cat /proc/cpuinfo | grep "model name" | cut -f 2 -d ":" | uniq` if [ -z "$curmodel" ]; then die failed to read CPU model fi genmodel=`cat $CACHE_CPU_FILE` if [ "$curmodel" != "$genmodel" ]; then rm -f $CACHE_VALUE_FILE rm -f $CACHE_CPU_FILE fi fi # If the cache is empty, find the best lapic_timer_advance_ns value # and cache it if [ ! -f $KVM_LAPIC_FILE ]; then die $KVM_LAPIC_FILE not found fi if [ ! -f $CACHE_VALUE_FILE ]; then if [ -f "$TSCDEADLINE_LATENCY" ]; then tempdir=`mktemp -d` isolatedcpu=`echo "$TUNED_isolated_cores_expanded" | cut -f 1 -d ","` run_tsc_deadline_latency $isolatedcpu > $tempdir/lat.out if ! ./find-lapictscdeadline-optimal.sh $tempdir/lat.out > $tempdir/opt.out; then die could not find optimal latency fi echo `cat $tempdir/opt.out | cut -f 2 -d ":"` > $CACHE_VALUE_FILE curmodel=`cat /proc/cpuinfo | grep "model name" | cut -f 2 -d ":" | uniq` echo "$curmodel" > $CACHE_CPU_FILE fi fi if [ -f $CACHE_VALUE_FILE ]; then echo `cat $CACHE_VALUE_FILE` > $KVM_LAPIC_FILE fi return 0 } stop() { [ "$1" = "full_rollback" ] && teardown_kvm_mod_low_latency enable_ksm return "$?" } verify() { if [ -f /sys/module/kvm/parameters/kvmclock_periodic_sync ]; then test "$(cat /sys/module/kvm/parameters/kvmclock_periodic_sync)" = 0 retval=$? fi if [ $retval -eq 0 -a -f /sys/module/kvm_intel/parameters/ple_gap ]; then test "$(cat /sys/module/kvm_intel/parameters/ple_gap)" = 0 retval=$? 
fi return $retval } process $@ tuned-2.10.0/profiles/realtime-virtual-host/tuned.conf000066400000000000000000000030541331721725100227670ustar00rootroot00000000000000# # tuned configuration # # Dependencies: # # - tuna # - awk # - wc [main] summary=Optimize for KVM guests running realtime workloads include=realtime [variables] # User is responsible for adding isolated_cores=X-Y to realtime-virtual-host-variables.conf include=/etc/tuned/realtime-virtual-host-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cores_present_expanded=${f:cpulist_present:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not present assert2=${f:assertion:isolated_cores contains present CPU(s):${isolated_cores_expanded}:${isolated_cores_present_expanded}} [scheduler] # group.group_name=rule_priority:scheduler_policy:scheduler_priority:core_affinity_in_hex:process_name_regex # for i in `pgrep ksoftirqd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ksoftirqd=0:f:2:*:ksoftirqd.* # for i in `pgrep rcuc` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcuc=0:f:4:*:rcuc.* # for i in `pgrep rcub` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcub=0:f:4:*:rcub.* # for i in `pgrep ktimersoftd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ktimersoftd=0:f:3:*:ktimersoftd.* ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.* [script] script=${i:PROFILE_DIR}/script.sh [bootloader] cmdline_rvh=+nohz=on nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} tuned-2.10.0/profiles/realtime/000077500000000000000000000000001331721725100163405ustar00rootroot00000000000000tuned-2.10.0/profiles/realtime/realtime-variables.conf000066400000000000000000000000731331721725100227570ustar00rootroot00000000000000# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # tuned-2.10.0/profiles/realtime/script.sh000077500000000000000000000002521331721725100202020ustar00rootroot00000000000000#!/bin/sh . /usr/lib/tuned/functions start() { return 0 } stop() { return 0 } verify() { tuna -c "$TUNED_isolated_cores" -P return "$?" 
} process $@ tuned-2.10.0/profiles/realtime/tuned.conf000066400000000000000000000027471331721725100203400ustar00rootroot00000000000000# tuned configuration # # Red Hat Enterprise Linux for Real Time Documentation: # https://docs.redhat.com [main] summary=Optimize for realtime workloads include = network-latency [variables] # User is responsible for updating variables.conf with variable content such as isolated_cores=X-Y include = /etc/tuned/realtime-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} # Non-isolated cores cpumask including offline cores not_isolated_cpumask = ${f:cpulist2hex_invert:${isolated_cores}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cores_present_expanded=${f:cpulist_present:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not present assert2=${f:assertion:isolated_cores contains present CPU(s):${isolated_cores_expanded}:${isolated_cores_present_expanded}} [sysctl] kernel.hung_task_timeout_secs = 600 kernel.nmi_watchdog = 0 kernel.sched_rt_runtime_us = -1 vm.stat_interval = 10 kernel.timer_migration = 0 [sysfs] /sys/bus/workqueue/devices/writeback/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/cpumask = ${not_isolated_cpumask} /sys/devices/system/machinecheck/machinecheck*/ignore_ce = 1 [bootloader] cmdline_realtime=+isolcpus=${isolated_cores} intel_pstate=disable nosoftlockup [script] script = ${i:PROFILE_DIR}/script.sh [scheduler] isolated_cores=${isolated_cores} tuned-2.10.0/profiles/sap-hana-vmware/000077500000000000000000000000001331721725100175255ustar00rootroot00000000000000tuned-2.10.0/profiles/sap-hana-vmware/sap-hana-vmware-variables.conf000066400000000000000000000001571331721725100253340ustar00rootroot00000000000000# Set this to the network interfaces which should have # large receive offload disabled sap_hana_vmware_nic=!* tuned-2.10.0/profiles/sap-hana-vmware/tuned.conf000066400000000000000000000007401331721725100215140ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for SAP HANA running inside a VMware guest include=throughput-performance [variables] # User is responsible for setting sap_hana_vmware_nic # in sap-hana-vmware-variables.conf include=/etc/tuned/sap-hana-vmware-variables.conf [cpu] force_latency=70 [vm] transparent_hugepages=never [sysctl] vm.swappiness = 30 kernel.sem = 1250 256000 100 8192 kernel.numa_balancing = 0 [net] devices=${sap_hana_vmware_nic} features=lro off tuned-2.10.0/profiles/sap-hana/000077500000000000000000000000001331721725100162265ustar00rootroot00000000000000tuned-2.10.0/profiles/sap-hana/tuned.conf000066400000000000000000000003371331721725100202170ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for SAP HANA include=throughput-performance [cpu] force_latency=70 [vm] transparent_hugepages=never [sysctl] kernel.sem = 1250 256000 100 8192 kernel.numa_balancing = 0 tuned-2.10.0/profiles/sap-netweaver/000077500000000000000000000000001331721725100173175ustar00rootroot00000000000000tuned-2.10.0/profiles/sap-netweaver/script.sh000077500000000000000000000023631331721725100211660ustar00rootroot00000000000000#!/bin/bash # SAP ktune script # # TODO: drop this script and implement native support # into plugins . 
/usr/lib/tuned/functions start() { # The following lines are for autodetection of SAP settings SAP_MAIN_MEMORY_TOTAL=`awk '/MemTotal:/ {print $2}' /proc/meminfo` SAP_SWAP_SPACE_TOTAL=`awk '/SwapTotal:/ {print $2}' /proc/meminfo` # Rounding to full Gigabytes SAP_VIRT_MEMORY_TOTAL=$(( ( $SAP_MAIN_MEMORY_TOTAL + $SAP_SWAP_SPACE_TOTAL + 1048576 ) / 1048576 )) # kernel.shmall is in 4 KB pages; minimum 20 GB (SAP Note 941735) SAP_SHMALL=$(( $SAP_VIRT_MEMORY_TOTAL * 1024 * 1024 / 4 )) # kernel.shmmax is in Bytes; minimum 20 GB (SAP Note 941735) SAP_SHMMAX=$(( $SAP_VIRT_MEMORY_TOTAL * 1024 * 1024 * 1024 )) CURR_SHMALL=`sysctl -n kernel.shmall` CURR_SHMMAX=`sysctl -n kernel.shmmax` save_value kernel.shmall "$CURR_SHMALL" save_value kernel.shmmax "$CURR_SHMMAX" (( $SAP_SHMALL > $CURR_SHMALL )) && sysctl -w kernel.shmall="$SAP_SHMALL" (( $SAP_SHMMAX > $CURR_SHMMAX )) && sysctl -w kernel.shmmax="$SAP_SHMMAX" return 0 } stop() { SHMALL=`restore_value kernel.shmall` SHMMAX=`restore_value kernel.shmmax` [ "$SHMALL" ] && sysctl -w kernel.shmall="$SHMALL" [ "$SHMMAX" ] && sysctl -w kernel.shmmax="$SHMMAX" return 0 } process $@ tuned-2.10.0/profiles/sap-netweaver/tuned.conf000066400000000000000000000003271331721725100213070ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for SAP NetWeaver include=throughput-performance [sysctl] kernel.sem = 1250 256000 100 8192 vm.max_map_count = 2000000 [script] script=${i:PROFILE_DIR}/script.sh tuned-2.10.0/profiles/server-powersave/000077500000000000000000000000001331721725100200555ustar00rootroot00000000000000tuned-2.10.0/profiles/server-powersave/tuned.conf000066400000000000000000000001671331721725100220470ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for server power savings [cpu] [disk] [scsi_host] alpm=min_power tuned-2.10.0/profiles/spindown-disk/000077500000000000000000000000001331721725100173275ustar00rootroot00000000000000tuned-2.10.0/profiles/spindown-disk/script.sh000077500000000000000000000010461331721725100211730ustar00rootroot00000000000000#!/bin/sh . /usr/lib/tuned/functions EXT_PARTITIONS=$(mount | grep -E "type ext(3|4)" | cut -d" " -f1) start() { [ "$USB_AUTOSUSPEND" = 1 ] && enable_usb_autosuspend disable_bluetooth enable_wifi_powersave disable_logs_syncing remount_partitions commit=600,noatime $EXT_PARTITIONS sync return 0 } stop() { [ "$USB_AUTOSUSPEND" = 1 ] && disable_usb_autosuspend enable_bluetooth disable_wifi_powersave restore_logs_syncing remount_partitions commit=5 $EXT_PARTITIONS return 0 } process $@ tuned-2.10.0/profiles/spindown-disk/tuned.conf000066400000000000000000000015621331721725100213210ustar00rootroot00000000000000# # tuned configuration # # spindown-disk usecase: # Safe extra energy on your laptop or home server # which wake-up only when you ssh to it. On server # could be hdparm and sysctl values problematic for # some type of discs. Laptops should be probably ok # with these numbers. # # Possible problems: # The script is remounting your ext3 fs if you have # it as noatime. Also configuration of rsyslog is # changed to not sync. hdparm is setting disc to # minimal spins but without use of tuned daemon. # Bluetooth will be switch off. # Wifi will be switch into power safe mode. 
[main] summary=Optimize for power saving by spinning-down rotational disks [disk] apm=128 spindown=6 [scsi_host] alpm=medium_power [sysctl] vm.dirty_writeback_centisecs=6000 vm.dirty_expire_centisecs=9000 vm.dirty_ratio=60 vm.laptop_mode=5 vm.swappiness=30 [script] script=${i:PROFILE_DIR}/script.sh tuned-2.10.0/profiles/throughput-performance/000077500000000000000000000000001331721725100212465ustar00rootroot00000000000000tuned-2.10.0/profiles/throughput-performance/tuned.conf000066400000000000000000000044771331721725100232500ustar00rootroot00000000000000# # tuned configuration # [main] summary=Broadly applicable tuning that provides excellent performance across a variety of common server workloads [cpu] governor=performance energy_perf_bias=performance min_perf_pct=100 [disk] # The default unit for readahead is KiB. This can be adjusted to sectors # by specifying the relevant suffix, eg. (readahead => 8192 s). There must # be at least one space between the number and suffix (if suffix is specified). readahead=>4096 [sysctl] # ktune sysctl settings for rhel6 servers, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns = 10000000 # SCHED_OTHER wake-up granularity. # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) # # This option delays the preemption effects of decoupled workloads # and reduces their over-scheduling. Synchronous workloads will still # have immediate wakeup/sleep latencies. kernel.sched_wakeup_granularity_ns = 15000000 # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio = 40 # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio = 10 # PID allocation wrap value. When the kernel's next PID value # reaches this value, it wraps back to a minimum PID value. # PIDs of value pid_max or larger are not allocated. # # A suggested value for pid_max is 1024 * <# of cpu cores/threads in system> # e.g., a box with 32 cpus, the default of 32768 is reasonable, for 64 cpus, # 65536, for 4096 cpus, 4194304 (which is the upper limit possible). #kernel.pid_max = 65536 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk. # 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=10 tuned-2.10.0/profiles/virtual-guest/000077500000000000000000000000001331721725100173515ustar00rootroot00000000000000tuned-2.10.0/profiles/virtual-guest/tuned.conf000066400000000000000000000013551331721725100213430ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for running inside a virtual guest include=throughput-performance [sysctl] # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. 
Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio = 30 # Filesystem I/O is usually much more efficient than swapping, so try to keep # swapping low. It's usually safe to go even lower than this on systems with # server-grade storage. vm.swappiness = 30 tuned-2.10.0/profiles/virtual-host/000077500000000000000000000000001331721725100171775ustar00rootroot00000000000000tuned-2.10.0/profiles/virtual-host/tuned.conf000066400000000000000000000006741331721725100211740ustar00rootroot00000000000000# # tuned configuration # [main] summary=Optimize for running KVM guests include=throughput-performance [sysctl] # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio = 5 # The total time the scheduler will consider a migrated process # "cache hot" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns = 5000000 tuned-2.10.0/recommend.conf000066400000000000000000000030561331721725100155370ustar00rootroot00000000000000# Tuned rules for recommend_profile. # # Syntax: # [PROFILE1] # KEYWORD11=RE11 # KEYWORD21=RE12 # # [PROFILE2] # KEYWORD21=RE21 # KEYWORD22=RE22 # KEYWORD can be: # virt - for RE to match output of virt-what # system - for RE to match content of /etc/system-release-cpe # process - for RE to match running processes. It can have arbitrary suffix, all # process* lines have to match for the PROFILE to match (i.e. the AND operator) # /FILE - for RE to match content of the FILE, e.g.: '/etc/passwd=.+'. If file doesn't # exist, its RE will not match. # All REs for all KEYWORDs have to match for PROFILE to match (i.e. the AND operator). # If 'virt' or 'system' is not specified, it matches for every string. # If 'virt' or 'system' is empty, i.e. 'virt=', it matches only empty string (alias for '^$'). # If several profiles matched, the first match is taken. # # Limitation: # Each profile can be specified only once, because there cannot be # multiple sections in the configuration file with the same name # (ConfigObj limitation). # If there is a need to specify the profile multiple times, unique # suffix like ',ANYSTRING' can be used. Everything after the last ',' # is stripped by the parser, e.g.: # # [balanced,1] # /FILE1=RE1 # # [balanced,2] # /FILE2=RE2 # # This will set 'balanced' profile in case there is FILE1 matching RE1 or # FILE2 matching RE2 or both. [atomic-host] virt= system=.*atomic.* [atomic-guest] virt=.+ system=.*atomic.* [throughput-performance] virt= system=.*(computenode|server).* [virtual-guest] virt=.+ [balanced] tuned-2.10.0/systemtap/000077500000000000000000000000001331721725100147445ustar00rootroot00000000000000tuned-2.10.0/systemtap/diskdevstat000077500000000000000000000102231331721725100172150ustar00rootroot00000000000000#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # global ifavg, iflast, rtime, interval, duration, histogram function help() { print( "A simple systemtap script to record harddisk activity of processes and display\n" "statistics for read/write operations.\n\n" "usage: diskdevstat [update-interval] [total-duration] [display-histogram]\n\n" " update-interval in seconds, the default is 5\n" " total-duration in seconds, the default is 86400\n" " display-histogram anything, unset by default\n" ); } probe begin { rtime = 0; interval = 5; duration = 86400; histogram = 0; %( $# > 0 %? interval = strtol(@1,10); %) %( $# > 1 %? duration = strtol(@2,10); %) %( $# > 2 %? histogram = 1; %) if ($# >= 4 || interval <= 0 || duration <= 0) { help(); exit(); } } probe vfs.write { if(pid() == 0 || devname == "N/A") { next; } ms = gettimeofday_ms(); if(iflast[0, pid(), devname, execname(), uid()] == 0) { iflast[0, pid(), devname, execname(), uid()] = ms; } else { diff = ms - iflast[0, pid(), devname, execname(), uid()]; iflast[0, pid(), devname, execname(), uid()] = ms; ifavg[0, pid(), devname, execname(), uid()] <<< diff; } } probe vfs.read { if(pid() == 0 || devname == "N/A") { next; } ms = gettimeofday_ms(); if(iflast[1, pid(), devname, execname(), uid()] == 0) { iflast[1, pid(), devname, execname(), uid()] = ms; } else { diff = ms - iflast[1, pid(), devname, execname(), uid()]; iflast[1, pid(), devname, execname(), uid()] = ms; ifavg[1, pid(), devname, execname(), uid()] <<< diff; } } function print_activity() { printf("\033[2J\033[1;1H") printf("%5s %5s %-7s %9s %9s %9s %9s %9s %9s %9s %9s %-15s\n", "PID", "UID", "DEV", "WRITE_CNT", "WRITE_MIN", "WRITE_MAX", "WRITE_AVG", "READ_CNT", "READ_MIN", "READ_MAX", "READ_AVG", "COMMAND") foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) write_min = nxmit ? @min(ifavg[0, pid, dev, exec, uid]) : 0 write_max = nxmit ? @max(ifavg[0, pid, dev, exec, uid]) : 0 write_avg = nxmit ? @avg(ifavg[0, pid, dev, exec, uid]) : 0 read_min = nrecv ? @min(ifavg[1, pid, dev, exec, uid]) : 0 read_max = nrecv ? @max(ifavg[1, pid, dev, exec, uid]) : 0 read_avg = nrecv ? 
@avg(ifavg[1, pid, dev, exec, uid]) : 0 if(type == 0 || nxmit == 0) { printf("%5d %5d %-7s %9d %5d.%03d %5d.%03d %5d.%03d", pid, uid, dev, nxmit, write_min/1000, write_min%1000, write_max/1000, write_max%1000, write_avg/1000, write_avg%1000) printf(" %9d %5d.%03d %5d.%03d %5d.%03d %-15s\n", nrecv, read_min/1000, read_min%1000, read_max/1000, read_max%1000, read_avg/1000, read_avg%1000, exec) } } print("\n") } function print_histogram() { foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) if (type == 0 || nxmit == 0) { printf("%5d %5d %-7s %-15s\n", pid, uid, dev, exec) if (nxmit > 0) { printf(" WRITE histogram\n") print(@hist_log(ifavg[0, pid, dev, exec, uid])) } if (nrecv > 0) { printf(" READ histogram\n") print(@hist_log(ifavg[1, pid, dev, exec, uid])) } } } } probe timer.s(1) { rtime = rtime + 1; if (rtime % interval == 0) { print_activity() } if (rtime >= duration) { exit(); } } probe end, error { if (histogram == 1) { print_histogram(); } exit(); } tuned-2.10.0/systemtap/netdevstat000077500000000000000000000101651331721725100170560ustar00rootroot00000000000000#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # global ifavg, iflast, rtime, interval, duration, histogram function help() { print( "A simple systemtap script to record network activity of processes and display\n" "statistics for transmit/receive operations.\n\n" "usage: netdevstat [update-interval] [total-duration] [display-histogram]\n\n" " update-interval in seconds, the default is 5\n" " total-duration in seconds, the default is 86400\n" " display-histogram anything, unset by default\n" ); } probe begin { rtime = 0; interval = 5; duration = 86400; histogram = 0; %( $# > 0 %? interval = strtol(@1,10); %) %( $# > 1 %? duration = strtol(@2,10); %) %( $# > 2 %? 
histogram = 1; %) if ($# >= 4 || interval <= 0 || duration <= 0) { help(); exit(); } } probe netdev.transmit { if(pid() == 0) { next; } ms = gettimeofday_ms(); if(iflast[0, pid(), dev_name, execname(), uid()] == 0) { iflast[0, pid(), dev_name, execname(), uid()] = ms; } else { diff = ms - iflast[0, pid(), dev_name, execname(), uid()]; iflast[0, pid(), dev_name, execname(), uid()] = ms; ifavg[0, pid(), dev_name, execname(), uid()] <<< diff; } } probe netdev.receive { if(pid() == 0) { next; } ms = gettimeofday_ms(); if(iflast[1, pid(), dev_name, execname(), uid()] == 0) { iflast[1, pid(), dev_name, execname(), uid()] = ms; } else { diff = ms - iflast[1, pid(), dev_name, execname(), uid()]; iflast[1, pid(), dev_name, execname(), uid()] = ms; ifavg[1, pid(), dev_name, execname(), uid()] <<< diff; } } function print_activity() { printf("\033[2J\033[1;1H") printf("%5s %5s %-7s %9s %9s %9s %9s %9s %9s %9s %9s %-15s\n", "PID", "UID", "DEV", "XMIT_CNT", "XMIT_MIN", "XMIT_MAX", "XMIT_AVG", "RECV_CNT", "RECV_MIN", "RECV_MAX", "RECV_AVG", "COMMAND") foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) xmit_min = nxmit ? @min(ifavg[0, pid, dev, exec, uid]) : 0 xmit_max = nxmit ? @max(ifavg[0, pid, dev, exec, uid]) : 0 xmit_avg = nxmit ? @avg(ifavg[0, pid, dev, exec, uid]) : 0 recv_min = nrecv ? @min(ifavg[1, pid, dev, exec, uid]) : 0 recv_max = nrecv ? @max(ifavg[1, pid, dev, exec, uid]) : 0 recv_avg = nrecv ? @avg(ifavg[1, pid, dev, exec, uid]) : 0 if(type == 0 || nxmit == 0) { printf("%5d %5d %-7s %9d %5d.%03d %5d.%03d %5d.%03d ", pid, uid, dev, nxmit, xmit_min/1000, xmit_min%1000, xmit_max/1000, xmit_max%1000, xmit_avg/1000, xmit_avg%1000) printf("%9d %5d.%03d %5d.%03d %5d.%03d %-15s\n", nrecv, recv_min/1000, recv_min%1000, recv_max/1000, recv_max%1000, recv_avg/1000, recv_avg%1000, exec) } } print("\n") } function print_histogram() { foreach ([type+, pid, dev, exec, uid] in ifavg) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) if (type == 0 || nxmit == 0) { printf("%5d %5d %-7s %-15s\n", pid, uid, dev, exec) if (nxmit > 0) { printf(" WRITE histogram\n") print(@hist_log(ifavg[0, pid, dev, exec, uid])) } if (nrecv > 0) { printf(" READ histogram\n") print(@hist_log(ifavg[1, pid, dev, exec, uid])) } } } } probe timer.s(1) { rtime = rtime + 1; if (rtime % interval == 0) { print_activity() } if (rtime >= duration) { exit(); } } probe end, error { if (histogram == 1) { print_histogram(); } exit(); } tuned-2.10.0/systemtap/scomes000077500000000000000000000154601331721725100161710ustar00rootroot00000000000000#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# # =================================================================== # Do this when we have started global report_period = 0 # For watch_forked variable: # 0 = watch only main thread # 1 = watch forked with same execname as well # 2 = watch all forked processes global watch_forked = 2 probe begin { if ($# == 1) { report_period = $1 } print("Collecting data...\n") printf("... for pid %d - %s\n", target(), pid2execname(target())) } # =================================================================== # Define helper function for printing results function compute_score() { # new empirical formula that was proposed by Vratislav Podzimek in his bachelor thesis return syscalls + (poll_timeout + epoll_timeout + select_timeout + itimer_timeout + nanosleep_timeout + futex_timeout + signal_timeout) * 4 + (reads + writes) / 5000 + (ifxmit + ifrecv) * 25 } function print_status() { printf("-----------------------------------\n") ###target_set_report() printf("Monitored process: %d (%s)\n", target(), pid2execname(target())) printf("Number of syscalls: %d\n", syscalls) printf("Kernel/Userspace ticks: %d/%d (%d)\n", kticks, uticks, kticks+uticks) printf("Read/Written bytes (from/to devices): %d/%d (%d)\n", reads, writes, reads+writes) printf("Read/Written bytes (from/to N/A device): %d/%d (%d)\n", reads_c, writes_c, reads_c+writes_c) printf("Transmitted/Recived bytes: %d/%d (%d)\n", ifxmit, ifrecv, ifxmit+ifrecv) printf("Polling syscalls: %d\n", poll_timeout+epoll_timeout+select_timeout+itimer_timeout+nanosleep_timeout+futex_timeout+signal_timeout) printf("SCORE: %d\n", compute_score()) } # =================================================================== # Define helper function for comparing if this is relevant pid # and for watching if our watched pid forked # ... from http://sourceware.org/systemtap/wiki/systemtapstarters global PIDS = 1 # as target() is already running function is_watched(p) { if ( (watch_forked == 0 && p == target()) || (watch_forked == 1 && target_set_pid(p) && pid2execname(target()) == pid2execname(p)) || (watch_forked == 2 && target_set_pid(p)) ) { #printf("Process %d is relevant to process %d\n", p, target()) return 1 # yes, we are watching this pid } else { return 0 # no, we are not watching this pid } } # Add a relevant forked process to the list of watched processes probe kernel.function("do_fork") { #printf("Fork of %d (%s) detected\n", pid(), execname()) if (is_watched(pid())) { #printf("Proces %d forked\n", pid()) PIDS = PIDS + 1 #printf("Currently watching %d pids (1 just added)\n", PIDS) } } # Remove pid from the list of watched pids and print report when # all relevant processes ends probe syscall.exit { if (is_watched(pid())) { #printf("Removing process %d\n", pid()) PIDS = PIDS - 1 } #printf("Currently watching %d pids (1 just removed)\n", PIDS) if (PIDS == 0) { printf("-----------------------------------\n") printf("LAST RESULTS:\n") print_status() exit() } } # =================================================================== # Check all syscalls # ... from syscalls_by_pid.stp global syscalls probe syscall.* { if (is_watched(pid())) { syscalls++ #printf ("%s(%d) syscall %s\n", execname(), pid(), name) } } # =================================================================== # Check read/written bytes # ... 
from disktop.stp global reads, writes, reads_c, writes_c probe vfs.read.return { if (is_watched(pid()) && $return>0) { if (devname!="N/A") { reads += $return } else { reads_c += $return } } } probe vfs.write.return { if (is_watched(pid()) && $return>0) { if (devname!="N/A") { writes += $return } else { writes_c += $return } } } # =================================================================== # Check kernel and userspace CPU ticks # ... from thread-times.stp global kticks, uticks probe timer.profile { if (is_watched(pid())) { if (!user_mode()) kticks++ else uticks++ } } # =================================================================== # Check polling # ... from timeout.stp global poll_timeout, epoll_timeout, select_timeout, itimer_timeout global nanosleep_timeout, futex_timeout, signal_timeout global to probe syscall.poll, syscall.epoll_wait { if (timeout) to[pid()]=timeout } probe syscall.poll.return { if ($return == 0 && is_watched(pid()) && to[pid()] > 0) { poll_timeout++ delete to[pid()] } } probe syscall.epoll_wait.return { if ($return == 0 && is_watched(pid()) && to[pid()] > 0) { epoll_timeout++ delete to[pid()] } } probe syscall.select.return { if ($return == 0 && is_watched(pid())) { select_timeout++ } } probe syscall.futex.return { if ($return == 0 && is_watched(pid()) && errno_str($return) == "ETIMEDOUT") { futex_timeout++ } } probe syscall.nanosleep.return { if ($return == 0 && is_watched(pid())) { nanosleep_timeout++ } } probe kernel.function("it_real_fn") { if (is_watched(pid())) { itimer_timeout++ } } probe syscall.rt_sigtimedwait.return { if (is_watched(pid()) && errno_str($return) == "EAGAIN") { signal_timeout++ } } # =================================================================== # Check network traffic # ... from nettop.stp global ifxmit, ifrecv probe netdev.transmit { if (is_watched(pid()) && dev_name!="lo") { ifxmit += length } } probe netdev.receive { if (is_watched(pid()) && dev_name!="lo") { ifrecv += length } } # =================================================================== # Print report each X seconds global counter probe timer.s(1) { if (report_period != 0) { counter++ if (counter == report_period) { print_status() counter = 0 } } } # =================================================================== # Print quit message probe end { printf("-----------------------------------\n") printf("LAST RESULTS:\n") print_status() printf("-----------------------------------\n") printf("QUITTING\n") printf("-----------------------------------\n") } tuned-2.10.0/systemtap/varnetload000077500000000000000000000055351331721725100170410ustar00rootroot00000000000000#!/usr/bin/python -Es # # varnetload: A python script to create reproducable sustained network traffic # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# # # Usage: varnetload [-d delay in milliseconds] [-t runtime in seconds] [-u URL] # # Howto use it: # - In order to effectively use it you need to have a http server in your local # LAN where you can put files. # - Upload a large HTML (or any other kind of file) to the http server. # - Use the -u option of the script to point to that URL # - Play with the delay option to vary the load put on your network. Typical # useful values range from 0 to 500. # from __future__ import print_function import time, getopt, sys # exception handler for python 2/3 compatibility try: from urllib.request import * from urllib.error import * except ImportError: from urllib2 import * def usage(): print("Usage: varnetload [-d delay in milliseconds] [-t runtime in seconds] [-u URL]") delay = 1000.0 rtime = 60.0 url = "http://myhost.mydomain/index.html" try: opts, args = getopt.getopt(sys.argv[1:], "d:t:u:") except getopt.error as e: print("Error parsing command-line arguments: %s" % e) usage() sys.exit(1) for (opt, val) in opts: if opt == '-d': delay = float(val) elif opt == '-t': rtime = float(val) elif opt == '-u': url = val else: print("Unknown option: %s" % opt) usage() sys.exit(1) endtime = time.time() + rtime delay = float(delay)/1000.0 try: count = 0 while(time.time() < endtime): if (delay < 0.01): urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) count += 9 urlopen(url).read(409600) time.sleep(delay) count += 1 except URLError as e: print("Downloading failed: %s" % e.reason) sys.exit(2) print("Finished varnetload. 
Received %d pages in %d seconds" % (count, rtime)) tuned-2.10.0/tests/000077500000000000000000000000001331721725100140555ustar00rootroot00000000000000tuned-2.10.0/tests/__init__.py000066400000000000000000000000001331721725100161540ustar00rootroot00000000000000tuned-2.10.0/tests/func_basic.sh000077500000000000000000000003471331721725100165140ustar00rootroot00000000000000#!/bin/bash systemctl restart tuned tuned-adm recommend PROFILES=`tuned-adm list | sed -n '/^\-/ s/^- // p'` for p in $PROFILES do tuned-adm profile "$p" sleep 5 tuned-adm active done tuned-adm profile `tuned-adm recommend` tuned-2.10.0/tests/globals.py000066400000000000000000000003221331721725100160470ustar00rootroot00000000000000import flexmock import logging import tuned.logs logger = logging.getLogger() handler = logging.NullHandler() logger.addHandler(handler) flexmock.flexmock(tuned.logs).should_receive("get").and_return(logger) tuned-2.10.0/tests/hardware/000077500000000000000000000000001331721725100156525ustar00rootroot00000000000000tuned-2.10.0/tests/hardware/__init__.py000066400000000000000000000000001331721725100177510ustar00rootroot00000000000000tuned-2.10.0/tests/hardware/test_device_matcher.py000066400000000000000000000023711331721725100222300ustar00rootroot00000000000000import unittest from tuned.hardware.device_matcher import DeviceMatcher class DeviceMatcherTestCase(unittest.TestCase): @classmethod def setUpClass(cls): cls.matcher = DeviceMatcher() def test_one_positive_rule(self): self.assertTrue(self.matcher.match("sd*", "sda")) self.assertFalse(self.matcher.match("sd*", "hda")) def test_multiple_positive_rules(self): self.assertTrue(self.matcher.match("sd* hd*", "sda")) self.assertTrue(self.matcher.match("sd* hd*", "hda")) self.assertFalse(self.matcher.match("sd* hd*", "dm-0")) def test_implicit_positive(self): self.assertTrue(self.matcher.match("", "sda")) self.assertTrue(self.matcher.match("!sd*", "hda")) self.assertFalse(self.matcher.match("!sd*", "sda")) def test_positve_negative_combination(self): self.assertTrue(self.matcher.match("sd* !sdb", "sda")) self.assertFalse(self.matcher.match("sd* !sdb", "sdb")) def test_positive_first(self): self.assertTrue(self.matcher.match("!sdb sd*", "sda")) self.assertFalse(self.matcher.match("!sdb sd*", "sdb")) def test_match_list(self): devices = ["sda", "sdb", "sdc"] self.assertListEqual(self.matcher.match_list("sd* !sdb", devices), ["sda", "sdc"]) self.assertListEqual(self.matcher.match_list("!sda", devices), ["sdb", "sdc"]) tuned-2.10.0/tests/monitors/000077500000000000000000000000001331721725100157275ustar00rootroot00000000000000tuned-2.10.0/tests/monitors/__init__.py000066400000000000000000000000001331721725100200260ustar00rootroot00000000000000tuned-2.10.0/tests/monitors/test_base.py000066400000000000000000000055161331721725100202610ustar00rootroot00000000000000import unittest import tests.globals import tuned.monitors.base class MockMonitor(tuned.monitors.base.Monitor): @classmethod def _init_available_devices(cls): cls._available_devices = set(["a", "b"]) @classmethod def update(cls): for device in ["a", "b"]: cls._load.setdefault(device, 0) cls._load[device] += 1 class MonitorBaseClassTestCase(unittest.TestCase): def test_fail_base_class_init(self): with self.assertRaises(NotImplementedError): tuned.monitors.base.Monitor() def test_update_fail_with_base_class(self): with self.assertRaises(NotImplementedError): tuned.monitors.base.Monitor.update() def test_available_devices(self): monitor = MockMonitor() devices = MockMonitor.get_available_devices() 
self.assertEqual(devices, set(["a", "b"])) monitor.cleanup() def test_registering_instances(self): monitor = MockMonitor() self.assertIn(monitor, MockMonitor.instances()) monitor.cleanup() self.assertNotIn(monitor, MockMonitor.instances()) def test_init_with_devices(self): monitor = MockMonitor() self.assertSetEqual(set(["a", "b"]), monitor.devices) monitor.cleanup() monitor = MockMonitor(["a"]) self.assertSetEqual(set(["a"]), monitor.devices) monitor.cleanup() monitor = MockMonitor([]) self.assertSetEqual(set(), monitor.devices) monitor.cleanup() monitor = MockMonitor(["b", "x"]) self.assertSetEqual(set(["b"]), monitor.devices) monitor.cleanup() def test_add_device(self): monitor = MockMonitor(["a"]) self.assertSetEqual(set(["a"]), monitor.devices) monitor.add_device("x") self.assertSetEqual(set(["a"]), monitor.devices) monitor.add_device("b") self.assertSetEqual(set(["a", "b"]), monitor.devices) monitor.cleanup() def test_remove_device(self): monitor = MockMonitor() self.assertSetEqual(set(["a", "b"]), monitor.devices) monitor.remove_device("a") self.assertSetEqual(set(["b"]), monitor.devices) monitor.remove_device("x") self.assertSetEqual(set(["b"]), monitor.devices) monitor.remove_device("b") self.assertSetEqual(set(), monitor.devices) monitor.cleanup() def test_get_load_from_enabled(self): monitor = MockMonitor() load = monitor.get_load() self.assertIn("a", load) self.assertIn("b", load) monitor.remove_device("a") load = monitor.get_load() self.assertNotIn("a", load) self.assertIn("b", load) monitor.remove_device("b") load = monitor.get_load() self.assertDictEqual({}, load) monitor.cleanup() def test_refresh_of_updating_devices(self): monitor1 = MockMonitor(["a"]) self.assertSetEqual(set(["a"]), MockMonitor._updating_devices) monitor2 = MockMonitor(["a", "b"]) self.assertSetEqual(set(["a", "b"]), MockMonitor._updating_devices) monitor1.cleanup() self.assertSetEqual(set(["a", "b"]), MockMonitor._updating_devices) monitor2.cleanup() self.assertSetEqual(set(), MockMonitor._updating_devices) tuned-2.10.0/tests/plugins/000077500000000000000000000000001331721725100155365ustar00rootroot00000000000000tuned-2.10.0/tests/plugins/__init__.py000066400000000000000000000000001331721725100176350ustar00rootroot00000000000000tuned-2.10.0/tests/plugins/test_base.py000066400000000000000000000110331331721725100200570ustar00rootroot00000000000000import unittest import tests.globals from flexmock import flexmock from tuned.plugins.base import Plugin as PluginBase import tuned.plugins.decorators class MockPlugin(PluginBase): @classmethod def _get_default_options(cls): return { 'color': 'blue', 'size': 'XXL' } class InvalidCommandPlugin(MockPlugin): @tuned.plugins.decorators.command_set('color') def _set_color(self, new_color): pass class CommandPlugin(MockPlugin): @classmethod def tunable_devices(cls): return ['a', 'b'] def _post_init(self): self._size = 'M' self._color = { 'a': 'green', 'b': 'pink' } @tuned.plugins.decorators.command_set('size') def _set_size(self, new_size): self._size = new_size @tuned.plugins.decorators.command_get('size') def _get_size(self): return self._size @tuned.plugins.decorators.command_set('color', per_device=True) def _set_color(self, device, new_color): self._color[device] = new_color @tuned.plugins.decorators.command_get('color') def _get_color(self, device): return self._color[device] class PluginBaseClassTestCase(unittest.TestCase): def setUp(self): self.storage_factory = flexmock(create = lambda name: None) self.monitor_repository = None self.plugin = 
MockPlugin(self.monitor_repository, self.storage_factory) def test_init(self): self.storage_factory.should_receive('create').and_return(None).times(2) plugin = MockPlugin(self.monitor_repository, self.storage_factory, None, None) plugin = MockPlugin(self.monitor_repository, self.storage_factory) def test_cleanup(self): self.plugin.cleanup() def test_update_tuning_not_implemented(self): with self.assertRaises(NotImplementedError): self.plugin.update_tuning() def test_class_properties(self): self.assertIs(PluginBase.tunable_devices(), None) self.assertTrue(PluginBase.is_supported()) def test_instance_properties(self): self.assertTrue(self.plugin.dynamic_tuning) def test_merge_unknown_options(self): plugin1 = PluginBase(self.monitor_repository, self.storage_factory, None, None) plugin2 = PluginBase(self.monitor_repository, self.storage_factory, None, {}) plugin3 = PluginBase(self.monitor_repository, self.storage_factory, None, {'unknown': 'test'}) self.assertDictEqual(plugin1._options, {}) self.assertDictEqual(plugin2._options, {}) self.assertDictEqual(plugin3._options, {}) def test_merge_known_options(self): plugin1 = MockPlugin(self.monitor_repository, self.storage_factory, None, None) plugin2 = MockPlugin(self.monitor_repository, self.storage_factory, None, {'color': 'red'}) plugin3 = MockPlugin(self.monitor_repository, self.storage_factory, None, {'size': 'S', 'fabric': 'cotton'}) self.assertDictEqual(plugin1._options, {'size': 'XXL', 'color': 'blue'}) self.assertDictEqual(plugin2._options, {'size': 'XXL', 'color': 'red'}) self.assertDictEqual(plugin3._options, {'size': 'S', 'color': 'blue'}) def test_classs_with_invalid_commands(self): with self.assertRaises(TypeError): plugin = InvalidCommandPlugin(self.monitor_repository, self.storage_factory) def test_storage_with_device_independent_commands(self): storage = flexmock() storage_factory = flexmock() storage_factory.should_receive('create').and_return(storage) storage.should_receive('set').with_args('size', 'M').once.ordered storage.should_receive('get').with_args('size').and_return('M').once.ordered storage.should_receive('unset').with_args('size').once.ordered plugin = CommandPlugin(self.monitor_repository, storage_factory, ['b'], {'size': 'XXS', 'color': None}) plugin.execute_commands() plugin.cleanup_commands() def test_storage_with_per_device_commands(self): storage = flexmock() storage_factory = flexmock() storage_factory.should_receive('create').and_return(storage) storage.should_receive('set').with_args('color@b', 'pink').once.ordered storage.should_receive('get').with_args('color@b').and_return('pink').once.ordered storage.should_receive('unset').with_args('color@b').once.ordered plugin = CommandPlugin(self.monitor_repository, storage_factory, ['b'], {'size': None, 'color': 'white'}) plugin.execute_commands() plugin.cleanup_commands() def test_exception_with_per_device_commands_when_no_devices_specified(self): storage = flexmock(set=lambda key: None, get=lambda key, value: None, unset=lambda key: None) storage_factory = flexmock() storage_factory.should_receive('create').and_return(storage) plugin = CommandPlugin(self.monitor_repository, storage_factory) with self.assertRaises(TypeError): plugin.execute_commands() with self.assertRaises(TypeError): plugin.cleanup_commands() 
tuned-2.10.0/tests/profiles/000077500000000000000000000000001331721725100157005ustar00rootroot00000000000000tuned-2.10.0/tests/profiles/__init__.py000066400000000000000000000000001331721725100177770ustar00rootroot00000000000000tuned-2.10.0/tests/profiles/test_loader.py000066400000000000000000000067731331721725100205740ustar00rootroot00000000000000import unittest import tempfile import shutil import os.path import tuned.profiles.exceptions from tuned.profiles.loader import Loader from flexmock import flexmock class MockProfile(object): def __init__(self, name, config): self.name = name self.options = {} self.units = {} self.test_config = config class MockProfileFactory(object): def create(self, name, config): return MockProfile(name, config) class MockProfileMerger(object): def merge(self, profiles): new = MockProfile("merged", {}) new.test_merged = profiles return new class LoaderTestCase(unittest.TestCase): def setUp(self): self.factory = MockProfileFactory() self.merger = MockProfileMerger() self.loader = Loader(self._tmp_load_dirs, self.factory, self.merger) @classmethod def setUpClass(cls): tmpdir1 = tempfile.mkdtemp() tmpdir2 = tempfile.mkdtemp() cls._tmp_load_dirs = [tmpdir1, tmpdir2] cls._create_profile(tmpdir1, "default", "[main]\n\n[network]\ntype=net\ndevices=em*\n\n[disk]\nenabled=false\n") cls._create_profile(tmpdir1, "invalid", "INVALID") cls._create_profile(tmpdir1, "expand", "[expand]\ntype=script\nscript=runme.sh\n") cls._create_profile(tmpdir2, "empty", "") cls._create_profile(tmpdir1, "custom", "[custom]\ntype=one\n") cls._create_profile(tmpdir2, "custom", "[custom]\ntype=two\n") @classmethod def tearDownClass(cls): for tmp_dir in cls._tmp_load_dirs: shutil.rmtree(tmp_dir, True) @classmethod def _create_profile(cls, load_dir, profile_name, tuned_conf_content): profile_dir = os.path.join(load_dir, profile_name) conf_name = os.path.join(profile_dir, "tuned.conf") os.mkdir(profile_dir) with open(conf_name, "w") as conf_file: conf_file.write(tuned_conf_content) def test_init(self): Loader([], None, None) Loader(["/tmp"], None, None) Loader(["/foo", "/bar"], None, None) def test_init_wrong_type(self): with self.assertRaises(TypeError): Loader(False, self.factory, self.merger) def test_load(self): profile = self.loader.load("default") self.assertIn("main", profile.test_config) self.assertIn("disk", profile.test_config) self.assertEqual(profile.test_config["network"]["devices"], "em*") def test_load_empty(self): profile = self.loader.load("empty") self.assertDictEqual(profile.test_config, {}) def test_load_invalid(self): with self.assertRaises(tuned.profiles.exceptions.InvalidProfileException): invalid_config = self.loader.load("invalid") def test_load_nonexistent(self): with self.assertRaises(tuned.profiles.exceptions.InvalidProfileException): config = self.loader.load("nonexistent") def test_load_order(self): profile = self.loader.load("custom") self.assertEqual(profile.test_config["custom"]["type"], "two") def test_default_load(self): profile = self.loader.load("empty") self.assertIs(type(profile), MockProfile) def test_script_expand_names(self): profile = self.loader.load("expand") expected_name = os.path.join(self._tmp_load_dirs[0], "expand", "runme.sh") self.assertEqual(profile.test_config["expand"]["script"], expected_name) def test_load_multiple_profiles(self): profile = self.loader.load(["default", "expand"]) self.assertEqual(len(profile.test_merged), 2) def test_include_directive(self): profile1 = MockProfile("first", {}) profile1.options = {"include": "default"} 
profile2 = MockProfile("second", {}) flexmock(self.factory).should_receive("create").and_return(profile1).and_return(profile2).twice() profile = self.loader.load("empty") self.assertEqual(len(profile.test_merged), 2) tuned-2.10.0/tests/profiles/test_locator.py000066400000000000000000000041031331721725100207520ustar00rootroot00000000000000import unittest import os import shutil import tempfile from tuned.profiles.locator import Locator class LocatorTestCase(unittest.TestCase): def setUp(self): self.locator = Locator(self._tmp_load_dirs) @classmethod def setUpClass(cls): tmpdir1 = tempfile.mkdtemp() tmpdir2 = tempfile.mkdtemp() cls._tmp_load_dirs = [tmpdir1, tmpdir2] cls._create_profile(tmpdir1, "balanced") cls._create_profile(tmpdir1, "powersafe") cls._create_profile(tmpdir2, "custom") cls._create_profile(tmpdir2, "balanced") @classmethod def tearDownClass(cls): for tmp_dir in cls._tmp_load_dirs: shutil.rmtree(tmp_dir, True) @classmethod def _create_profile(cls, load_dir, profile_name): profile_dir = os.path.join(load_dir, profile_name) conf_name = os.path.join(profile_dir, "tuned.conf") os.mkdir(profile_dir) with open(conf_name, "w") as conf_file: pass def test_init(self): Locator([]) def test_init_invalid_type(self): with self.assertRaises(TypeError): Locator("string") def test_get_known_names(self): known = self.locator.get_known_names() self.assertListEqual(known, ["balanced", "custom", "powersafe"]) def test_get_config(self): config_name = self.locator.get_config("custom") self.assertEqual(config_name, os.path.join(self._tmp_load_dirs[1], "custom", "tuned.conf")) def test_get_config_priority(self): customized = self.locator.get_config("balanced") self.assertEqual(customized, os.path.join(self._tmp_load_dirs[1], "balanced", "tuned.conf")) system = self.locator.get_config("balanced", [customized]) self.assertEqual(system, os.path.join(self._tmp_load_dirs[0], "balanced", "tuned.conf")) none = self.locator.get_config("balanced", [customized, system]) self.assertIsNone(none) def test_ignore_nonexistent_dirs(self): locator = Locator([self._tmp_load_dirs[0], "/tmp/some-dir-which-does-not-exist-for-sure"]) balanced = locator.get_config("balanced") self.assertEqual(balanced, os.path.join(self._tmp_load_dirs[0], "balanced", "tuned.conf")) known = locator.get_known_names() self.assertListEqual(known, ["balanced", "powersafe"]) tuned-2.10.0/tests/profiles/test_merger.py000066400000000000000000000030501331721725100205700ustar00rootroot00000000000000import unittest from tuned.profiles.merger import Merger from collections import OrderedDict class MergerTestCase(unittest.TestCase): def test_merge_without_replace(self): merger = Merger() config1 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em0", "custom": "option"}), ]) config2 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em1" }), ]) config = merger.merge([config1, config2]) self.assertIn("main", config) self.assertIn("net", config) self.assertEqual(config["net"]["custom"], "option") self.assertEqual(config["net"]["devices"], "em1") def test_merge_with_replace(self): merger = Merger() config1 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em0", "custom": "option"}), ]) config2 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em1", "replace": True }), ]) config = merger.merge([config1, config2]) self.assertIn("main", config) self.assertIn("net", config) self.assertNotIn("custom", config["net"]) self.assertEqual(config["net"]["devices"], "em1") def test_merge_multiple_order(self): merger 
= Merger() config1 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em0" }) ]) config2 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em1" }) ]) config3 = OrderedDict([ ("main", OrderedDict()), ("net", { "devices": "em2" }) ]) config = merger.merge([config1, config2, config3]) self.assertIn("main", config) self.assertIn("net", config) self.assertEqual(config["net"]["devices"], "em2") tuned-2.10.0/tests/profiles/test_profile.py000066400000000000000000000032341331721725100207530ustar00rootroot00000000000000import unittest import tuned.profiles import collections class MockProfile(tuned.profiles.profile.Profile): def _create_unit(self, name, config): return (name, config) class ProfileTestCase(unittest.TestCase): def test_init(self): MockProfile("test", {}) def test_create_units(self): profile = MockProfile("test", { "main": { "anything": 10 }, "network" : { "type": "net", "devices": "*" }, "storage" : { "type": "disk" }, }) self.assertIs(type(profile.units), collections.OrderedDict) self.assertEqual(len(profile.units), 2) self.assertListEqual(sorted([name_config[0] for name_config in profile.units]), sorted(["network", "storage"])) def test_create_units_empty(self): profile = MockProfile("test", {"main":{}}) self.assertIs(type(profile.units), collections.OrderedDict) self.assertEqual(len(profile.units), 0) def test_sets_name(self): profile1 = MockProfile("test_one", {}) profile2 = MockProfile("test_two", {}) self.assertEqual(profile1.name, "test_one") self.assertEqual(profile2.name, "test_two") def test_change_name(self): profile = MockProfile("oldname", {}) self.assertEqual(profile.name, "oldname") profile.name = "newname" self.assertEqual(profile.name, "newname") def test_sets_options(self): profile = MockProfile("test", { "main": { "anything": 10 }, "network" : { "type": "net", "devices": "*" }, }) self.assertIs(type(profile.options), dict) self.assertEqual(profile.options["anything"], 10) def test_sets_options_empty(self): profile = MockProfile("test", { "storage" : { "type": "disk" }, }) self.assertIs(type(profile.options), dict) self.assertEqual(len(profile.options), 0) tuned-2.10.0/tests/profiles/test_unit.py000066400000000000000000000022161331721725100202710ustar00rootroot00000000000000import unittest from tuned.profiles import Unit class UnitTestCase(unittest.TestCase): def test_default_options(self): unit = Unit("sample", {}) self.assertEqual(unit.name, "sample") self.assertEqual(unit.type, "sample") self.assertTrue(unit.enabled) self.assertFalse(unit.replace) self.assertDictEqual(unit.options, {}) def test_option_type(self): unit = Unit("sample", {"type": "net"}) self.assertEqual(unit.type, "net") def test_option_enabled(self): unit = Unit("sample", {"enabled": False}) self.assertFalse(unit.enabled) unit.enabled = True self.assertTrue(unit.enabled) def test_option_replace(self): unit = Unit("sample", {"replace": True}) self.assertTrue(unit.replace) def test_option_custom(self): unit = Unit("sample", {"enabled": True, "type": "net", "custom": "value", "foo": "bar"}) self.assertDictEqual(unit.options, {"custom": "value", "foo": "bar"}) unit.options = {"hello": "world"} self.assertDictEqual(unit.options, {"hello": "world"}) def test_parsing_options(self): unit = Unit("sample", {"type": "net", "enabled": True, "replace": True, "other": "foo"}) self.assertEqual(unit.type, "net") 
tuned-2.10.0/tests/storage/000077500000000000000000000000001331721725100155215ustar00rootroot00000000000000tuned-2.10.0/tests/storage/__init__.py000066400000000000000000000000001331721725100176200ustar00rootroot00000000000000tuned-2.10.0/tests/storage/test_factory.py000066400000000000000000000011621331721725100206010ustar00rootroot00000000000000import unittest from flexmock import flexmock import tuned.storage class StorageFactoryTestCase(unittest.TestCase): def test_create(self): mock_provider = flexmock() factory = tuned.storage.Factory(mock_provider) self.assertEqual(mock_provider, factory.provider) def test_create_storage(self): mock_provider = flexmock() factory = tuned.storage.Factory(mock_provider) storage_foo = factory.create("foo") storage_bar = factory.create("bar") self.assertIsInstance(storage_foo, tuned.storage.Storage) self.assertIsInstance(storage_bar, tuned.storage.Storage) self.assertIsNot(storage_foo, storage_bar) tuned-2.10.0/tests/storage/test_pickle_provider.py000066400000000000000000000037711331721725100223230ustar00rootroot00000000000000import unittest import os.path import tempfile import tuned.storage class StoragePickleProviderTestCase(unittest.TestCase): def setUp(self): (handle, filename) = tempfile.mkstemp() self._temp_filename = filename def tearDown(self): if os.path.exists(self._temp_filename): os.unlink(self._temp_filename) def test_default_path(self): provider = tuned.storage.PickleProvider(self._temp_filename) self.assertEqual(self._temp_filename, provider._path) provider = tuned.storage.PickleProvider() self.assertEqual("/run/tuned/save.pickle", provider._path) def test_memory_persistence(self): provider = tuned.storage.PickleProvider(self._temp_filename) self.assertEqual("default", provider.get("ns1", "opt1", "default")) self.assertIsNone(provider.get("ns2", "opt1")) provider.set("ns1", "opt1", "value1") provider.set("ns1", "opt2", "value2") provider.set("ns2", "opt1", "value3") self.assertEqual("value1", provider.get("ns1", "opt1")) self.assertEqual("value2", provider.get("ns1", "opt2")) self.assertEqual("value3", provider.get("ns2", "opt1")) provider.unset("ns1", "opt1") self.assertIsNone(provider.get("ns1", "opt1")) self.assertEqual("value2", provider.get("ns1", "opt2")) provider.clear() self.assertIsNone(provider.get("ns1", "opt1")) self.assertIsNone(provider.get("ns1", "opt2")) self.assertIsNone(provider.get("ns2", "opt1")) def test_file_persistence(self): provider = tuned.storage.PickleProvider(self._temp_filename) provider.load() provider.set("ns1", "opt1", "value1") provider.set("ns2", "opt2", "value2") provider.save() del provider provider = tuned.storage.PickleProvider(self._temp_filename) provider.load() self.assertEqual("value1", provider.get("ns1", "opt1")) self.assertEqual("value2", provider.get("ns2", "opt2")) provider.clear() del provider provider = tuned.storage.PickleProvider(self._temp_filename) provider.load() self.assertIsNone(provider.get("ns1", "opt1")) self.assertIsNone(provider.get("ns2", "opt2")) tuned-2.10.0/tests/storage/test_storage.py000066400000000000000000000017361331721725100206050ustar00rootroot00000000000000import unittest from flexmock import flexmock import tuned.storage class StorageStorageTestCase(unittest.TestCase): def test_set(self): mock_provider = flexmock() factory = tuned.storage.Factory(mock_provider) storage = factory.create("foo") mock_provider.should_receive("set").with_args("foo", "optname", "optval").once storage.set("optname", "optval") def test_get(self): mock_provider = flexmock() factory = 
tuned.storage.Factory(mock_provider) storage = factory.create("foo") mock_provider.should_receive("get").with_args("foo", "optname", None).and_return(None).once.ordered mock_provider.should_receive("get").with_args("foo", "optname", "defval").and_return("defval").once.ordered mock_provider.should_receive("get").with_args("foo", "existing", None).and_return("somevalue").once.ordered self.assertIsNone(storage.get("optname")) self.assertEqual("defval", storage.get("optname", "defval")) self.assertEqual("somevalue", storage.get("existing")) tuned-2.10.0/tuned-adm.bash000066400000000000000000000007771331721725100154430ustar00rootroot00000000000000# bash completion for tuned-adm _tuned_adm() { local commands="active list off profile recommend verify" local cur prev words cword _init_completion || return if [[ "$cword" -eq 1 ]]; then COMPREPLY=( $(compgen -W "$commands" -- "$cur" ) ) elif [[ "$cword" -eq 2 && "$prev" == "profile" ]]; then COMPREPLY=( $(compgen -W "$(command find /usr/lib/tuned /etc/tuned -mindepth 1 -maxdepth 1 -type d -printf "%f\n")" -- "$cur" ) ) else COMPREPLY=() fi return 0 } && complete -F _tuned_adm tuned-adm tuned-2.10.0/tuned-adm.py000077500000000000000000000115071331721725100151520ustar00rootroot00000000000000#!/usr/bin/python -Es # # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # from __future__ import print_function import argparse import sys import traceback import tuned.admin import tuned.consts as consts import tuned.version as ver from tuned.utils.global_config import GlobalConfig def check_positive(value): try: val = int(value) except ValueError: val = -1 if val <= 0: raise argparse.ArgumentTypeError("%s has to be >= 0" % value) return val def check_log_level(value): try: return consts.CAPTURE_LOG_LEVELS[value.lower()] except KeyError: levels = ", ".join(consts.CAPTURE_LOG_LEVELS.keys()) raise argparse.ArgumentTypeError( "Invalid log level: %s. Valid log levels: %s." % (value, levels)) if __name__ == "__main__": config = GlobalConfig() parser = argparse.ArgumentParser(description="Manage tuned daemon.") parser.add_argument('--version', "-v", action = "version", version = "%%(prog)s %s.%s.%s" % (ver.TUNED_VERSION_MAJOR, ver.TUNED_VERSION_MINOR, ver.TUNED_VERSION_PATCH)) parser.add_argument("--debug", "-d", action="store_true", help="show debug messages") parser.add_argument("--async", "-a", action="store_true", help="with dbus do not wait on commands completion and return immediately") parser.add_argument("--timeout", "-t", default = consts.ADMIN_TIMEOUT, type = check_positive, help="with sync operation use specific timeout instead of the default %d second(s)" % consts.ADMIN_TIMEOUT) levels = ", ".join(consts.CAPTURE_LOG_LEVELS.keys()) help = "level of log messages to capture (one of %s). 
Default: %s" \ % (levels, consts.CAPTURE_LOG_LEVEL) parser.add_argument("--loglevel", "-l", default = consts.CAPTURE_LOG_LEVEL, type = check_log_level, help = help) subparsers = parser.add_subparsers() parser_list = subparsers.add_parser("list", help="list available profiles") parser_list.set_defaults(action="list") parser_active = subparsers.add_parser("active", help="show active profile") parser_active.set_defaults(action="active") parser_off = subparsers.add_parser("off", help="switch off all tunings") parser_off.set_defaults(action="off") parser_profile = subparsers.add_parser("profile", help="switch to a given profile, or list available profiles if no profile is given") parser_profile.set_defaults(action="profile") parser_profile.add_argument("profiles", metavar="profile", type=str, nargs="*", help="profile name") parser_profile_info = subparsers.add_parser("profile_info", help="show information/description of given profile or current profile if no profile is specified") parser_profile_info.set_defaults(action="profile_info") parser_profile_info.add_argument("profile", metavar="profile", type=str, nargs="?", default="", help="profile name, current profile if not specified") if config.get(consts.CFG_RECOMMEND_COMMAND, consts.CFG_DEF_RECOMMEND_COMMAND): parser_off = subparsers.add_parser("recommend", help="recommend profile") parser_off.set_defaults(action="recommend_profile") parser_verify = subparsers.add_parser("verify", help="verify profile") parser_verify.set_defaults(action="verify_profile") parser_verify.add_argument("--ignore-missing", "-i", action="store_true", help="do not treat missing/non-supported tunings as errors") parser_auto_profile = subparsers.add_parser("auto_profile", help="enable automatic profile selection mode, switch to the recommended profile") parser_auto_profile.set_defaults(action="auto_profile") parser_profile_mode = subparsers.add_parser("profile_mode", help="show current profile selection mode") parser_profile_mode.set_defaults(action="profile_mode") args = parser.parse_args(sys.argv[1:]) options = vars(args) debug = options.pop("debug") async = options.pop("async") timeout = options.pop("timeout") action_name = options.pop("action") log_level = options.pop("loglevel") result = False dbus = config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) try: admin = tuned.admin.Admin(dbus, debug, async, timeout, log_level) result = admin.action(action_name, **options) except: traceback.print_exc() sys.exit(3) if result == False: sys.exit(1) else: sys.exit(0) tuned-2.10.0/tuned-gui.desktop000066400000000000000000000003701331721725100162070ustar00rootroot00000000000000[Desktop Entry] Name=tuned-gui GenericName=tuned Comment=GTK GUI that can control Tuned daemon and provides simple profile editor Exec=/usr/sbin/tuned-gui Icon=tuned Terminal=false Type=Application Categories=Settings;HardwareSettings; Version=1.0 tuned-2.10.0/tuned-gui.glade000066400000000000000000002513261331721725100156230ustar00rootroot00000000000000 False dialog Tuned Manager False vertical 2 False end False True end 0 False dialog False vertical 2 False end gtk-apply True True True True False True 0 gtk-cancel True True True True False True 1 False True end 0 True False vertical True False 20 20 True True False True 0 300 True True 30 30 True True False True 1 False True 1 buttonApplyChangeValue buttonCancel1 True False addPluginValue True False Add Value True addCustomPluginValue True False Add Custom Value True True False deletePluginValue True False Delete Value True False dialog False 
vertical 2 False end gtk-add True True True True 0.40999999642372131 0.60000002384185791 bottom False True 0 gtk-cancel True True True True False True 1 False True end 0 True False vertical True False Choose plugin value to add: False True 0 True True Value ... False True 1 False True 1 button3 button4 False dialog False vertical 2 False end gtk-ok True True True True bottom False True 0 gtk-cancel True True True True False True 1 False True end 0 True False vertical True False 30 10 Plugin: False True 0 True False False True 1 False True 1 buttonAddPluginDialog buttonCloseAddPlugin False dialog False vertical 2 False end gtk-add True True True True 0.40999999642372131 0.60000002384185791 bottom False True 0 gtk-cancel True True True True False True 1 False True end 0 True False vertical True False Choose plugin value to add: False True 0 True False False True 1 False True 1 button1 button2 False Tuned Manager True False vertical True False True False _File True True False gtk-quit True False True True True False _Help True True False gtk-about True False True True False True 0 True True False True True False immediate vertical True False vertical True False start 150 True False end True 0 Tuned On Startup end 0.029999999999999999 0 1 1 1 True True start center 5 5 True 1 0 1 1 True True start center 5 5 True 1 1 1 1 150 True False end True 0 Start Tuned Daemon end 0 0 1 1 150 True False end True Admin functions 0 2 1 1 True True start center True 1 2 1 1 False False 5 0 True False False True 1 True False center start True False label 0.089999999999999997 1 0 1 1 True False 20 20 Active profile: 0 0 1 1 True False 15 Included profile: 0 1 1 1 True False 15 label 1 1 1 1 False True 2 True False True False False True 3 False True 0 True False False True 1 False True False Summary fill 0.01 False True False 5 5 5 5 vertical True False 20 20 Choose profile to manage: False True 0 True False center baseline 20 20 Create New Profile 200 True True True 120 0 0 1 1 Update 200 True True True 10 queue 1 0 1 1 Delete 200 True True True center center right 2 0 1 1 False True end 2 True True in True True True both True True False True 3 1 True False Profiles True 1 False True False vertical True False False True 0 True False vertical True False 30 10 Default values and options to set False True 0 100 True True False True 1 True False 30 10 Documentation False True 2 250 True True False True 3 False True 1 2 True False Plugins 2 False True True 1 True False vertical False True 2 True False True True False end end 30 5 5 Actual Profile: fill 0 0 1 1 True False end end 30 5 5 True Recomended Profile: 3 0 1 1 True False start end 40 5 5 label 75 1 0 1 1 True False start end 40 5 5 label True 4 0 1 1 True False end end 5 5 True DBUS Status: 6 0 1 1 True False start end 10 5 5 label 7 0 1 1 True False vertical 2 0 1 1 True False vertical 5 0 1 1 False False end 3 True False False True 4 True False 2 2 Change Profile True True True 2 0 1 1 True False True 1 0 1 1 True False 10 Profile: 0.089999999999999997 0 0 1 1 20 20 True False 5 5 False False 3 0 1 1 False True 5 True False False True 6 False dialog False vertical 2 False end False True end 0 False dialog error close False vertical 2 False end False True end 0 True True treestoreActualPlugins False 5 dialog False vertical 2 False end Turn On True True True False True 0 Exit True True True False True 1 False True end 0 True False 50 Daemon TUned Is not running. 
False True 1 turn_daemon_on_button cancel_button False Profile Editor True True False True False 20 20 Create Profile start 0 0 1 1 True False True False vertical Add Plugin 160 True True True 20 20 12 13 True True 1 Remove Plugin 160 True True True 20 20 True True 2 160 0 True False 150 150 vertical True False False True 0 Raw Editor 160 True True True center 10 10 True True 1 True False False True 2 False True 3 Confirm 160 True True True 20 20 20 20 True True 4 Confirm True True True 20 20 20 20 True True 5 gtk-cancel 160 True True True 20 20 20 True True True 6 1 0 1 1 500 450 True True 10 10 10 10 True True True True True in 0 0 1 1 0 4 1 1 True False 10 10 0 1 1 1 True False True False 20 Profile Name: 0 0 1 1 260 True True 1 0 1 1 150 True False end True 2 0 1 1 Include True True True start end 10 10 True 3 0 1 1 0 2 1 1 True False 10 10 0 3 1 1 False True False vertical True False 20 20 Profile Config: False True 0 True True in True True 5 5 5 5 False True 1 True False end gtk-apply True True True 10 10 10 10 True 0 0 1 1 gtk-close True True True 10 10 10 10 True 1 0 1 1 False True end 2 tuned-2.10.0/tuned-gui.py000077500000000000000000001031371331721725100151760ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright (C) 2014 Red Hat, Inc. # Authors: Marek Staňa, Jaroslav Škarvada # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # ''' Created on Oct 15, 2013 @author: mstana ''' from __future__ import print_function try: import gi except ImportError: raise ImportError("Gtk3 backend requires pygobject to be installed.") try: gi.require_version("Gtk", "3.0") except AttributeError: raise ImportError( "pygobject version too old -- it must have require_version") except ValueError: raise ImportError( "Gtk3 backend requires the GObject introspection bindings for Gtk 3 " "to be installed.") try: from gi.repository import Gtk, GObject except ImportError: raise ImportError("Gtk3 backend requires pygobject to be installed.") import sys import os import time import configobj import subprocess import tuned.logs import tuned.consts as consts import tuned.version as version import tuned.admin.dbus_controller import tuned.gtk.gui_profile_loader import tuned.gtk.gui_plugin_loader import tuned.profiles.profile as profile import tuned.utils.global_config from tuned.utils.commands import commands from tuned.gtk.managerException import ManagerException EXECNAME = '/usr/sbin/tuned-gui' GLADEUI = '/usr/share/tuned/ui/tuned-gui.glade' LICENSE = 'GNU GPL version 2 or later ' NAME = 'tuned' VERSION = 'tuned ' + str(version.TUNED_VERSION_MAJOR) + '.' + \ str(version.TUNED_VERSION_MINOR) + '.' + str(version.TUNED_VERSION_PATCH) COPYRIGHT = 'Copyright (C) 2014 Red Hat, Inc.' AUTHORS = [ '', 'Marek Staňa', 'Jaroslav Škarvada ', ] debug = False class Base(object): """ GUI class for program Tuned. 
""" is_admin = False def _starting(self): try: self.controller = \ tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT) self.controller.is_running() except tuned.admin.exceptions.TunedAdminDBusException as ex: response = self.tuned_daemon_exception_dialog.run() if response == 0: # button Turn ON pressed # switch_tuned_start_stop notify the switch which call funcion start_tuned self._start_tuned() self.tuned_daemon_exception_dialog.hide() return True elif response == 1: self.error_dialog('Tuned is shutting down.', 'Reason: missing communication with Tuned daemon.' ) return False return True def __init__(self): self.active_profile = None self._cmd = commands(debug) self.config = tuned.utils.global_config.GlobalConfig() self.builder = Gtk.Builder() try: self.builder.add_from_file(GLADEUI) except GObject.GError as e: print("Error loading '%s'" % GLADEUI, file=sys.stderr) sys.exit(1) # # DIALOGS # self.messagedialog_operation_error = \ self.builder.get_object('messagedialogOperationError') self.tuned_daemon_exception_dialog = \ self.builder.get_object('tunedDaemonExceptionDialog') self.dialog_add_plugin = \ self.builder.get_object('dialogAddPlugin') self.tuned_daemon_exception_dialog.connect('destroy', lambda d: \ self.tuned_daemon_exception_dialog.hide()) self.cancel_button = self.builder.get_object('cancel_button') self.cancel_button.connect('clicked', lambda d: \ self.tuned_daemon_exception_dialog.hide()) if not self._starting(): return self.manager = \ tuned.gtk.gui_profile_loader.GuiProfileLoader(tuned.consts.LOAD_DIRECTORIES) self.manager = \ tuned.gtk.gui_profile_loader.GuiProfileLoader(tuned.consts.LOAD_DIRECTORIES) self.plugin_loader = \ tuned.gtk.gui_plugin_loader.GuiPluginLoader() action_group = Gtk.ActionGroup('my_actions') self.builder.connect_signals(self) self.builder.connect_signals(self) # # WINDOW MAIN # self.main_window = self.builder.get_object('mainWindow') # # WINDOW PROFILE EDITOR # self.window_profile_editor = \ self.builder.get_object('windowProfileEditor') self.window_profile_editor.connect('delete-event', self.on_delete_event) self.entry_profile_name = \ self.builder.get_object('entryProfileName') self.combobox_include_profile = \ self.builder.get_object('comboboxIncludeProfile') self.togglebutton_include_profile = \ self.builder.get_object('togglebuttonIncludeProfile') self.notebook_plugins = \ self.builder.get_object('notebookPlugins') self.button_add_plugin = \ self.builder.get_object('buttonAddPlugin') self.button_remove_plugin = \ self.builder.get_object('buttonRemovePlugin') self.button_open_raw = self.builder.get_object('buttonOpenRaw') self.button_cancel = self.builder.get_object('buttonCancel') self.button_open_raw.connect('clicked', self.execute_open_raw_button) self.button_add_plugin.connect('clicked', self.execute_add_plugin_to_notebook) self.button_remove_plugin.connect('clicked', self.execute_remove_plugin_from_notebook) self.button_cancel.connect('clicked', self.execute_cancel_window_profile_editor) # # WINDOW PROFILE EDITOR RAW # self.window_profile_editor_raw = \ self.builder.get_object('windowProfileEditorRaw') self.window_profile_editor_raw.connect('delete-event', self.on_delete_event) self.button_apply = self.builder.get_object('buttonApply') self.button_apply.connect('clicked', self.execute_apply_window_profile_editor_raw) self.button_cancel_raw = \ self.builder.get_object('buttonCancelRaw') self.button_cancel_raw.connect('clicked', self.execute_cancel_window_profile_editor_raw) self.textview_profile_config_raw = 
\ self.builder.get_object('textviewProfileConfigRaw') self.textview_profile_config_raw.set_editable(True) self.textview_plugin_avaible_text = \ self.builder.get_object('textviewPluginAvaibleText') self.textview_plugin_documentation_text = \ self.builder.get_object('textviewPluginDocumentationText') self.textview_plugin_avaible_text.set_editable(False) self.textview_plugin_documentation_text.set_editable(False) # # DIALOG ABOUT # self.about_dialog = Gtk.AboutDialog.new() self.about_dialog.set_name(NAME) self.about_dialog.set_version(VERSION) self.about_dialog.set_license(LICENSE) self.about_dialog.set_wrap_license(True) self.about_dialog.set_copyright(COPYRIGHT) self.about_dialog.set_authors(AUTHORS) # # GET WIDGETS # self.imagemenuitem_quit = \ self.builder.get_object('imagemenuitemQuit') self.imagemenuitem_about = \ self.builder.get_object('imagemenuitemAbout') self.label_actual_profile = \ self.builder.get_object('labelActualProfile') self.label_recommended_profile = \ self.builder.get_object('label_recommemnded_profile') self.label_dbus_status = \ self.builder.get_object('labelDbusStatus') self.label_summary_profile = \ self.builder.get_object('summaryProfileName') self.label_summary_included_profile = \ self.builder.get_object('summaryIncludedProfileName') self.comboboxtext_fast_change_profile = \ self.builder.get_object('comboboxtextFastChangeProfile') self.button_fast_change_profile = \ self.builder.get_object('buttonFastChangeProfile') self.spinner_fast_change_profile = \ self.builder.get_object('spinnerFastChangeProfile') self.spinner_fast_change_profile.hide() self.switch_tuned_start_stop = \ self.builder.get_object('switchTunedStartStop') self.switch_tuned_startup_start_stop = \ self.builder.get_object('switchTunedStartupStartStop') self.switch_tuned_admin_functions = \ self.builder.get_object('switchTunedAdminFunctions') self.treeview_profile_manager = \ self.builder.get_object('treeviewProfileManager') self.treeview_actual_plugins = \ self.builder.get_object('treeviewActualPlugins') # # SET WIDGETS # self.treestore_profiles = Gtk.ListStore(GObject.TYPE_STRING, GObject.TYPE_STRING) self.treestore_plugins = Gtk.ListStore(GObject.TYPE_STRING) for plugin in sorted(self.plugin_loader.plugins): self.treestore_plugins.append([plugin.name]) self.combobox_plugins = \ self.builder.get_object('comboboxPlugins') self.combobox_plugins.set_model(self.treestore_plugins) self.combobox_main_plugins = \ self.builder.get_object('comboboxMainPlugins') self.combobox_main_plugins.set_model(self.treestore_plugins) self.combobox_main_plugins.connect('changed', self.on_changed_combobox_plugins) self.combobox_include_profile.set_model(self.treestore_profiles) cell = Gtk.CellRendererText() self.combobox_include_profile.pack_start(cell, True) self.combobox_include_profile.add_attribute(cell, 'text', 0) self.treeview_profile_manager.append_column(Gtk.TreeViewColumn('Type' , Gtk.CellRendererText(), text=1)) self.treeview_profile_manager.append_column(Gtk.TreeViewColumn('Name' , Gtk.CellRendererText(), text=0)) self.treeview_profile_manager.set_model(self.treestore_profiles) for profile_name in self.manager.get_names(): if self.manager.is_profile_factory(profile_name): self.treestore_profiles.append([profile_name, consts.PREFIX_PROFILE_FACTORY]) else: self.treestore_profiles.append([profile_name, consts.PREFIX_PROFILE_USER]) self.treeview_profile_manager.get_selection().select_path(0) self.button_create_profile = \ self.builder.get_object('buttonCreateProfile') self.button_upadte_selected_profile = \ 
self.builder.get_object('buttonUpadteSelectedProfile') self.button_delete_selected_profile = \ self.builder.get_object('buttonDeleteSelectedProfile') self.label_actual_profile.set_text(self.controller.active_profile()) if self.config.get(consts.CFG_RECOMMEND_COMMAND): self.label_recommended_profile.set_text(self.controller.recommend_profile()) self.listbox_summary_of_active_profile = \ self.builder.get_object('listboxSummaryOfActiveProfile') self.data_for_listbox_summary_of_active_profile() self.comboboxtext_fast_change_profile.set_model(self.treestore_profiles) self.label_dbus_status.set_text(str(bool(self.controller.is_running()))) self.switch_tuned_start_stop.set_active(True) self.switch_tuned_startup_start_stop.set_active(self.service_run_on_start_up('tuned' )) self.switch_tuned_admin_functions.set_active(self.is_admin) self.menu_add_plugin_value = \ self.builder.get_object('menuAddPluginValue') self.add_plugin_value_action = \ self.builder.get_object('addPluginValue') self.add_custom_plugin_value = \ self.builder.get_object('addCustomPluginValue') self.delete_plugin_value_action = \ self.builder.get_object('deletePluginValue') self.add_plugin_value_action.connect('activate', self.add_plugin_value_to_treeview) self.add_custom_plugin_value.connect('activate', self.add_custom_plugin_value_to_treeview) self.delete_plugin_value_action.connect('activate', self.delete_plugin_value_to_treeview) # # CONNECTIONS # self.imagemenuitem_quit.connect('activate', Gtk.main_quit) self.imagemenuitem_about.connect('activate', self.execute_about) self.comboboxtext_fast_change_profile.set_active(self.get_iter_from_model_by_name(self.comboboxtext_fast_change_profile.get_model(), self.controller.active_profile())) self.button_fast_change_profile.connect('clicked', self.execute_change_profile) self.switch_tuned_start_stop.connect('notify::active', self.execute_switch_tuned) self.switch_tuned_startup_start_stop.connect('notify::active', self.execute_switch_tuned) self.switch_tuned_admin_functions.connect('notify::active', self.execute_switch_tuned_admin_functions) self.button_create_profile.connect('clicked', self.execute_create_profile) self.button_upadte_selected_profile.connect('clicked', self.execute_update_profile) self.button_delete_selected_profile.connect('clicked', self.execute_remove_profile) self.button_confirm_profile_create = \ self.builder.get_object('buttonConfirmProfileCreate') self.button_confirm_profile_update = \ self.builder.get_object('buttonConfirmProfileUpdate') self.button_confirm_profile_create.connect('clicked', self.on_click_button_confirm_profile_create) self.button_confirm_profile_update.connect('clicked', self.on_click_button_confirm_profile_update) self.editing_profile_name = None self.treeview_actual_plugins.connect('row-activated', self.on_treeview_click) # self.treeview_profile_manager.connect('row-activated',lambda x,y,z: self.execute_update_profile(x,y)) # TO DO: need to be fixed! - double click on treeview self.main_window.connect('destroy', Gtk.main_quit) self.main_window.show() Gtk.main() def get_iter_from_model_by_name(self, model, item_name): ''' Return iter from model selected by name of item in this model ''' model = self.combobox_include_profile.get_model() selected = 0 for item in model: try: if item[0] == item_name: selected = int(item.path.to_string()) except KeyError: pass return selected def is_tuned_connection_ok(self): """ Result True, False depends on if tuned daemon is running. If its not runing this method try to start tuned. 
""" try: self.controller.is_running() return True except tuned.admin.exceptions.TunedAdminDBusException: response = self.tuned_daemon_exception_dialog.run() if response == 0: # button Turn ON pressed # switch_tuned_start_stop notify the switch which call funcion start_tuned try: self._start_tuned() self.tuned_daemon_exception_dialog.hide() self.switch_tuned_start_stop.set_active(True) return True except: self.tuned_daemon_exception_dialog.hide() return False else: self.tuned_daemon_exception_dialog.hide() return False def data_for_listbox_summary_of_active_profile(self): """ This add rows to object listbox_summary_of_active_profile. Row consist of grid. Inside grid on first possition is label, second possition is vertical grid. label = name of plugin verical grid consist of labels where are stored values for plugin option and value. This method is emited after change profile and on startup of app. """ for row in self.listbox_summary_of_active_profile: self.listbox_summary_of_active_profile.remove(row) if self.is_tuned_connection_ok(): self.active_profile = \ self.manager.get_profile(self.controller.active_profile()) else: self.active_profile = None self.label_summary_profile.set_text(self.active_profile.name) try: self.label_summary_included_profile.set_text(self.active_profile.options['include' ]) except: # keyerror probably self.label_summary_included_profile.set_text('None') row = Gtk.ListBoxRow() box = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=0) plugin_name = Gtk.Label() plugin_name.set_markup('Plugin Name') plugin_option = Gtk.Label() plugin_option.set_markup('Plugin Options') box.pack_start(plugin_name, True, True, 0) box.pack_start(plugin_option, True, True, 0) row.add(box) self.listbox_summary_of_active_profile.add(row) sep = Gtk.Separator.new(Gtk.Orientation.HORIZONTAL) self.listbox_summary_of_active_profile.add(sep) sep.show() for u in self.active_profile.units: row = Gtk.ListBoxRow() hbox = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=0) hbox.set_homogeneous(True) row.add(hbox) label = Gtk.Label() label.set_markup(u) label.set_justify(Gtk.Justification.LEFT) hbox.pack_start(label, False, True, 1) grid = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=0) grid.set_homogeneous(True) for o in self.active_profile.units[u].options: label_option = Gtk.Label() label_option.set_markup(o + ' = ' + '' + self.active_profile.units[u].options[o] + '') grid.pack_start(label_option, False, True, 0) hbox.pack_start(grid, False, True, 0) self.listbox_summary_of_active_profile.add(row) separator = Gtk.Separator.new(Gtk.Orientation.HORIZONTAL) self.listbox_summary_of_active_profile.add(separator) separator.show() self.listbox_summary_of_active_profile.show_all() # def on_treeview_button_press_event(self, treeview, event): # popup = Gtk.Menu() # popup.append(Gtk.MenuItem('add')) # popup.append(Gtk.MenuItem('delete')) # # if event.button == 3: # x = int(event.x) # y = int(event.y) # time = event.time # pthinfo = treeview.get_path_at_pos(x, y) # if pthinfo is not None: # path, col, cellx, celly = pthinfo # treeview.grab_focus() # treeview.set_cursor(path, col, 0) # popup.popup(None, None, lambda menu, data: # (event.get_root_coords()[0], event.get_root_coords()[1], True), None, event.button, event.time) # return True def on_changed_combobox_plugins(self, combo): plugin = \ self.plugin_loader.get_plugin(self.combobox_main_plugins.get_active_text()) if plugin is None: self.textview_plugin_avaible_text.get_buffer().set_text('') 
self.textview_plugin_documentation_text.get_buffer().set_text('' ) return options = '\n'.join('%s = %r' % (key, val) for (key, val) in plugin._get_config_options().items()) self.textview_plugin_avaible_text.get_buffer().set_text(options) self.textview_plugin_documentation_text.get_buffer().set_text(plugin.__doc__) def on_delete_event(self, window, data): window.hide() return True def _get_active_profile_name(self): return self.manager.get_profile(self.controller.active_profile()).name def execute_remove_profile(self, button): profile = self.get_treeview_selected() try: if self._get_active_profile_name() == profile: self.error_dialog('You can not remove active profile', 'Please deactivate profile by choosind another!' ) return if profile is None: self.error_dialog('No profile selected!', '') return if self.window_profile_editor.is_active(): self.error_dialog('You are ediding ' + self.editing_profile_name + ' profile.', 'Please close edit window and try again.' ) return self.manager.remove_profile(profile, is_admin=self.is_admin) for item in self.treestore_profiles: if item[0] == profile: iter = self.treestore_profiles.get_iter(item.path) self.treestore_profiles.remove(iter) except ManagerException as ex: self.error_dialog('Profile can not be remove', ex.__str__()) def execute_cancel_window_profile_editor(self, button): self.window_profile_editor.hide() def execute_cancel_window_profile_editor_raw(self, button): self.window_profile_editor_raw.hide() def execute_open_raw_button(self, button): profile_name = self.get_treeview_selected() text_buffer = self.textview_profile_config_raw.get_buffer() text_buffer.set_text(self.manager.get_raw_profile(profile_name)) self.window_profile_editor_raw.show_all() def execute_add_plugin_to_notebook(self, button): if self.choose_plugin_dialog() == 1: plugin_name = self.combobox_plugins.get_active_text() plugin_to_tab = None for plugin in self.plugin_loader.plugins: if plugin.name == plugin_name: for children in self.notebook_plugins: if plugin_name \ == self.notebook_plugins.get_menu_label_text(children): self.error_dialog('Plugin ' + plugin_name + ' is already in profile.', '') return plugin_to_tab = plugin self.notebook_plugins.append_page_menu(self.treeview_for_data(plugin_to_tab._get_config_options()), Gtk.Label(plugin_to_tab.name), Gtk.Label(plugin_to_tab.name)) self.notebook_plugins.show_all() def execute_remove_plugin_from_notebook(self, data): treestore = Gtk.ListStore(GObject.TYPE_STRING) for children in self.notebook_plugins.get_children(): treestore.append([self.notebook_plugins.get_menu_label_text(children)]) self.combobox_plugins.set_model(treestore) response_of_dialog = self.choose_plugin_dialog() if response_of_dialog == 1: # ok button pressed selected = self.combobox_plugins.get_active_text() for children in self.notebook_plugins.get_children(): if self.notebook_plugins.get_menu_label_text(children) \ == selected: self.notebook_plugins.remove(children) self.combobox_plugins.set_model(self.treestore_plugins) def execute_apply_window_profile_editor_raw(self, data): text_buffer = self.textview_profile_config_raw.get_buffer() start = text_buffer.get_start_iter() end = text_buffer.get_end_iter() profile_name = self.get_treeview_selected() self.manager.set_raw_profile(profile_name, text_buffer.get_text(start, end, True)) self.error_dialog('Profile Editor will be closed.', 'for next updates reopen profile.') self.window_profile_editor.hide() self.window_profile_editor_raw.hide() # refresh window_profile_editor def execute_create_profile(self, 
button): self.reset_values_window_edit_profile() self.button_confirm_profile_create.show() self.button_confirm_profile_update.hide() self.button_open_raw.hide() for child in self.notebook_plugins.get_children(): self.notebook_plugins.remove(child) self.window_profile_editor.show() def reset_values_window_edit_profile(self): self.entry_profile_name.set_text('') self.combobox_include_profile.set_active(0) for child in self.notebook_plugins.get_children(): self.notebook_plugins.remove(child) def get_treeview_selected(self): """ Return value of treeview which is selected at calling moment of this function. """ selection = self.treeview_profile_manager.get_selection() (model, iter) = selection.get_selected() if iter is None: self.error_dialog('No profile selected', '') return self.treestore_profiles.get_value(iter, 0) def on_click_button_confirm_profile_update(self, data): profile_name = self.get_treeview_selected() prof = self.data_to_profile_config() for item in self.treestore_profiles: try: if item[0] == profile_name: iter = self.treestore_profiles.get_iter(item.path) self.treestore_profiles.remove(iter) except KeyError: raise KeyError('this cant happen') self.manager.update_profile(profile_name, prof, self.is_admin) if self.manager.is_profile_factory(prof.name): prefix = consts.PREFIX_PROFILE_FACTORY else: prefix = consts.PREFIX_PROFILE_USER self.treestore_profiles.append([prof.name, prefix]) self.window_profile_editor.hide() def data_to_profile_config(self): name = self.entry_profile_name.get_text() config = configobj.ConfigObj(list_values = False, interpolation = False) activated = self.combobox_include_profile.get_active() model = self.combobox_include_profile.get_model() include = model[activated][0] if self.togglebutton_include_profile.get_active(): config['main'] = {'include': include} for children in self.notebook_plugins: acumulate_options = {} for item in children.get_model(): if item[0] != 'None': acumulate_options[item[1]] = item[0] config[self.notebook_plugins.get_menu_label_text(children)] = \ acumulate_options return profile.Profile(name, config) def on_click_button_confirm_profile_create(self, data): # try: prof = self.data_to_profile_config() self.manager.save_profile(prof) self.manager._load_all_profiles() self.treestore_profiles.append([prof.name, consts.PREFIX_PROFILE_USER]) self.window_profile_editor.hide() # except ManagerException: # self.error_dialog("Profile with name " + prof.name # + " already exist.", "Please choose another name for profile") def execute_update_profile(self, data): # if (self.treeview_profile_manager.get_activate_on_single_click()): # print "returning" # print self.treeview_profile_manager.get_activate_on_single_click() # return self.button_confirm_profile_create.hide() self.button_confirm_profile_update.show() self.button_open_raw.show() label_update_profile = \ self.builder.get_object('labelUpdateProfile') label_update_profile.set_text('Update Profile') for child in self.notebook_plugins.get_children(): self.notebook_plugins.remove(child) self.editing_profile_name = self.get_treeview_selected() if self.editing_profile_name is None: self.error_dialog('No profile Selected', 'To update profile please select profile.' ) return if self._get_active_profile_name() == self.editing_profile_name: self.error_dialog('You can not update active profile', 'Please deactivate profile by choosing another!' 
) return if self.manager.is_profile_removable(self.editing_profile_name) \ or self.is_admin: profile = \ self.manager.get_profile(self.editing_profile_name) self.entry_profile_name.set_text(profile.name) model = self.combobox_include_profile.get_model() selected = 0 self.togglebutton_include_profile.set_active(False) for item in model: try: if item[0] == profile.options['include']: selected = int(item.path.to_string()) self.togglebutton_include_profile.set_active(True) except KeyError: pass # profile doesn't have an include section self.combobox_include_profile.set_active(selected) # load all values not just normal for (name, unit) in list(profile.units.items()): self.notebook_plugins.append_page_menu(self.treeview_for_data(unit.options), Gtk.Label(unit.name), Gtk.Label(unit.name)) self.notebook_plugins.show_all() self.window_profile_editor.show() else: self.error_dialog('You cannot update Factory profile', '') def treeview_for_data(self, data): """ This prepares a treestore and treeview for the given data and returns the treeview """ treestore = Gtk.ListStore(GObject.TYPE_STRING, GObject.TYPE_STRING) for (option, value) in list(data.items()): treestore.append([str(value), option]) treeview = Gtk.TreeView(treestore) renderer = Gtk.CellRendererText() column_option = Gtk.TreeViewColumn('Option', renderer, text=0) column_value = Gtk.TreeViewColumn('Value', renderer, text=1) treeview.append_column(column_value) treeview.append_column(column_option) treeview.enable_grid_lines = True treeview.connect('row-activated', self.change_value_dialog) treeview.connect('button_press_event', self.on_treeview_click) return treeview def execute_change_profile(self, button): """ Change profile in main window. """ self.spinner_fast_change_profile.show() self.spinner_fast_change_profile.start() if button is not None: text = \ self.comboboxtext_fast_change_profile.get_active_text() if text is not None: if self.is_tuned_connection_ok(): self.controller.switch_profile(text) self.label_actual_profile.set_text(self.controller.active_profile()) self.data_for_listbox_summary_of_active_profile() self.active_profile = \ self.manager.get_profile(self.controller.active_profile()) else: self.label_actual_profile.set_text('') else: self.error_dialog('No profile selected', '') self.spinner_fast_change_profile.stop() self.spinner_fast_change_profile.hide() def execute_switch_tuned(self, switch, data): """ Supports switch_tuned_start_stop and switch_tuned_startup_start_stop. """ if switch == self.switch_tuned_start_stop: # starts or stops the tuned daemon if self.switch_tuned_start_stop.get_active(): self.is_tuned_connection_ok() else: self._cmd.execute(['service', 'tuned', 'stop']) self.error_dialog('Tuned Daemon is turned off', 'Support of tuned is not running.') elif switch == self.switch_tuned_startup_start_stop: # switch option for start tuned on start up if self.switch_tuned_startup_start_stop.get_active(): self._cmd.execute(['systemctl', 'enable', 'tuned']) else: self._cmd.execute(['systemctl', 'disable', 'tuned']) else: raise NotImplementedError() def execute_switch_tuned_admin_functions(self, switch, data): self.is_admin = self.switch_tuned_admin_functions.get_active() def service_run_on_start_up(self, service): """ Returns True if tuned is set to run on system startup, otherwise returns False. """ (temp, out) = self._cmd.execute(['systemctl', 'is-enabled', service]) if temp == 0: return True return False def error_dialog(self, error, info): """ General error dialog with two fields. Primary and secondary text fields.
""" self.messagedialog_operation_error.set_markup(error) self.messagedialog_operation_error.format_secondary_text(info) self.messagedialog_operation_error.run() self.messagedialog_operation_error.hide() def execute_about(self, widget): self.about_dialog.run() self.about_dialog.hide() def change_value_dialog( self, tree_view, path, treeview_column, ): """ Shows a dialog after a double click on a treeview stored in the notebook of plugins. The dialog allows you to change a specific option's value in the plugin. """ model = tree_view.get_model() dialog = self.builder.get_object('changeValueDialog') button_apply = self.builder.get_object('buttonApplyChangeValue') button_cancel = self.builder.get_object('buttonCancel1') entry1 = self.builder.get_object('entry1') text = self.builder.get_object('labelTextDialogChangeValue') text.set_text(model.get_value(model.get_iter(path), 1)) text = model.get_value(model.get_iter(path), 0) if text is not None: entry1.set_text(text) else: entry1.set_text('') dialog.connect('destroy', lambda d: dialog.hide()) button_cancel.connect('clicked', lambda d: dialog.hide()) if dialog.run() == 1: model.set_value(model.get_iter(path), 0, entry1.get_text()) dialog.hide() def choose_plugin_dialog(self): """ Shows a dialog with a combobox listing the plugins available to add. """ self.combobox_plugins.set_active(0) self.button_add_plugin = \ self.builder.get_object('buttonAddPluginDialog') self.button_cancel_add_plugin_dialog = \ self.builder.get_object('buttonCloseAddPlugin') self.button_cancel_add_plugin_dialog.connect('clicked', lambda d: self.dialog_add_plugin.hide()) self.dialog_add_plugin.connect('destroy', lambda d: \ self.dialog_add_plugin.hide()) response = self.dialog_add_plugin.run() self.dialog_add_plugin.hide() return response def on_treeview_click(self, treeview, event): if event.button == 3: popup = Gtk.Menu() popup.append(Gtk.MenuItem('add')) popup.append(Gtk.MenuItem('delete')) time = event.time self.menu_add_plugin_value.popup( None, None, None, None, event.button, time, ) return True @staticmethod def liststore_contains_item(liststore, item): for liststore_item in liststore: if liststore_item[1] == item: return True return False def add_plugin_value_to_treeview(self, action): current_plugin = \ self.notebook_plugins.get_tab_label(self.notebook_plugins.get_nth_page(self.notebook_plugins.get_current_page())).get_text() current_plugin_options = \ self.plugin_loader.get_plugin(current_plugin)._get_config_options() curent_plugin_values_model = \ self.notebook_plugins.get_nth_page(self.notebook_plugins.get_current_page()).get_model() treestore_plugins_values = Gtk.ListStore(GObject.TYPE_STRING) for vl_name in current_plugin_options: if not self.liststore_contains_item(curent_plugin_values_model, vl_name): treestore_plugins_values.append([vl_name]) dialog_add_plugin_value = \ self.builder.get_object('dialogAddPluginValue') dialog_add_plugin_value.connect('destroy', lambda d: \ dialog_add_plugin_value.hide()) combobox = self.builder.get_object('comboboxPluginsValues') combobox.set_model(treestore_plugins_values) response = dialog_add_plugin_value.run() dialog_add_plugin_value.hide() if response == 1: active = combobox.get_active_text() curent_plugin_values_model.append([current_plugin_options.get(active), active]) return True return False def add_custom_plugin_value_to_treeview(self, action): curent_plugin_values_model = \ self.notebook_plugins.get_nth_page(self.notebook_plugins.get_current_page()).get_model() dialog_add_custom_plugin_value = \ 
self.builder.get_object('dialogAddCustomPluginValue') text = self.builder.get_object('entry2') dialog_add_custom_plugin_value.connect('destroy', lambda d: \ dialog_add_custom_plugin_value.hide()) response = dialog_add_custom_plugin_value.run() dialog_add_custom_plugin_value.hide() if response == 1: curent_plugin_values_model.append(['', text.get_text()]) return True return False def delete_plugin_value_to_treeview(self, action): curent_plugin_values_tree = \ self.notebook_plugins.get_nth_page(self.notebook_plugins.get_current_page()) (model, iter) = \ curent_plugin_values_tree.get_selection().get_selected() if model is None or iter is None: return False model.remove(iter) return True def _start_tuned(self): self._cmd.execute(['service', 'tuned', 'start']) time.sleep(10) self.controller = tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT) if __name__ == '__main__': if os.geteuid() != 0: try: # Explicitly disabling shell to be safe ec = subprocess.call(['pkexec', EXECNAME] + sys.argv[1:], shell = False) except (subprocess.CalledProcessError) as e: print('Error elevating privileges: %s' % e, file=sys.stderr) else: # If not pkexec error if ec not in [126, 127]: sys.exit(0) # In case of error elevating privileges print('Superuser permissions are required to run the daemon.', file=sys.stderr) sys.exit(1) base = Base() tuned-2.10.0/tuned-main.conf000066400000000000000000000022551331721725100156270ustar00rootroot00000000000000# Global tuned configuration file. # Whether to use daemon. Without daemon it just applies tuning. It is # not recommended, because many functions don't work without daemon, # e.g. there will be no D-Bus, no rollback of settings, no hotplug, # no dynamic tuning, ... daemon = 1 # Dynamicaly tune devices, if disabled only static tuning will be used. dynamic_tuning = 1 # How long to sleep before checking for events (in seconds) # higher number means lower overhead but longer response time. sleep_interval = 1 # Update interval for dynamic tunings (in seconds). # It must be multiply of the sleep_interval. update_interval = 10 # Recommend functionality, if disabled "recommend" command will be not # available in CLI, daemon will not parse recommend.conf but will return # one hardcoded profile (by default "balanced"). recommend_command = 1 # Whether to reapply sysctl from the e.g /etc/sysctl.conf, /etc/sysctl.d, ... # If enabled these sysctls will be re-appliead after Tuned sysctls are # applied, i.e. Tuned sysctls will not override system sysctls. reapply_sysctl = 1 # Default priority assigned to instances default_instance_priority = 0 # Udev buffer size udev_buffer_size = 1MB tuned-2.10.0/tuned.py000077500000000000000000000061551331721725100144160ustar00rootroot00000000000000#!/usr/bin/python -Es # # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # from __future__ import print_function import argparse import os import sys import traceback import tuned.logs import tuned.daemon import tuned.exceptions import tuned.consts as consts import tuned.version as ver from tuned.utils.global_config import GlobalConfig def error(message): print(message, file=sys.stderr) if __name__ == "__main__": parser = argparse.ArgumentParser(description = "Daemon for monitoring and adaptive tuning of system devices.") parser.add_argument("--daemon", "-d", action = "store_true", help = "run on background") parser.add_argument("--debug", "-D", action = "store_true", help = "show/log debugging messages") parser.add_argument("--log", "-l", nargs = "?", const = consts.LOG_FILE, help = "log to file, default file: " + consts.LOG_FILE) parser.add_argument("--pid", "-P", nargs = "?", const = consts.PID_FILE, help = "write PID file, default file: " + consts.PID_FILE) parser.add_argument("--no-dbus", action = "store_true", help = "do not attach to DBus") parser.add_argument("--profile", "-p", action = "store", type=str, metavar = "name", help = "tuning profile to be activated") parser.add_argument('--version', "-v", action = "version", version = "%%(prog)s %s.%s.%s" % (ver.TUNED_VERSION_MAJOR, ver.TUNED_VERSION_MINOR, ver.TUNED_VERSION_PATCH)) args = parser.parse_args(sys.argv[1:]) if os.geteuid() != 0: error("Superuser permissions are required to run the daemon.") sys.exit(1) config = GlobalConfig() log = tuned.logs.get() if args.debug: log.setLevel("DEBUG") try: if args.daemon: if args.log is None: args.log = consts.LOG_FILE log.switch_to_file(args.log) else: if args.log is not None: log.switch_to_file(args.log) app = tuned.daemon.Application(args.profile, config) # no daemon mode doesn't need DBus if not config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON): args.no_dbus = True if not args.no_dbus: app.attach_to_dbus(consts.DBUS_BUS, consts.DBUS_OBJECT, consts.DBUS_INTERFACE) # always write PID file if args.pid is None: args.pid = consts.PID_FILE if args.daemon: app.daemonize(args.pid) else: app.write_pid_file(args.pid) app.run(args.daemon) except tuned.exceptions.TunedException as exception: if (args.debug): traceback.print_exc() else: error(str(exception)) sys.exit(1) tuned-2.10.0/tuned.service000066400000000000000000000005701331721725100154160ustar00rootroot00000000000000[Unit] Description=Dynamic System Tuning Daemon After=systemd-sysctl.service network.target dbus.service Requires=dbus.service polkit.service Conflicts=cpupower.service Documentation=man:tuned(8) man:tuned.conf(5) man:tuned-adm(8) [Service] Type=dbus PIDFile=/run/tuned/tuned.pid BusName=com.redhat.tuned ExecStart=/usr/sbin/tuned -l -P [Install] WantedBy=multi-user.target tuned-2.10.0/tuned.spec000066400000000000000000001034501331721725100147110ustar00rootroot00000000000000%bcond_with snapshot %if 0%{?fedora} %if 0%{?fedora} > 27 %bcond_without python3 %else %bcond_with python3 %endif %else %if 0%{?rhel} && 0%{?rhel} < 8 %bcond_with python3 %else %bcond_without python3 %endif %endif %if %{with python3} %global _py python3 %else %{!?python2_sitelib:%global python2_sitelib %{python_sitelib}} %if 0%{?rhel} && 0%{?rhel} < 8 %global _py python %else %global _py python2 %endif %endif %if %{with snapshot} %if 0%{!?git_short_commit:1} %global git_short_commit %(git rev-parse --short=8 
--verify HEAD) %endif %global git_date %(date +'%Y%m%d') %global git_suffix %{git_date}git%{git_short_commit} %endif #%%global prerelease rc #%%global prereleasenum 1 %global prerel1 %{?prerelease:.%{prerelease}%{prereleasenum}} %global prerel2 %{?prerelease:-%{prerelease}.%{prereleasenum}} Summary: A dynamic adaptive system tuning daemon Name: tuned Version: 2.10.0 Release: 1%{?prerel1}%{?with_snapshot:.%{git_suffix}}%{?dist} License: GPLv2+ Source0: https://github.com/redhat-performance/%{name}/archive/v%{version}%{?prerel2}.tar.gz#/%{name}-%{version}%{?prerel1}.tar.gz URL: http://www.tuned-project.org/ BuildArch: noarch BuildRequires: systemd, desktop-file-utils Requires(post): systemd, virt-what Requires(preun): systemd Requires(postun): systemd BuildRequires: %{_py}, %{_py}-devel Requires: %{_py}-decorator, %{_py}-pyudev, %{_py}-configobj Requires: %{_py}-schedutils, %{_py}-linux-procfs, %{_py}-perf # requires for packages with inconsistent python2/3 names %if %{with python3} Requires: python3-dbus, python3-gobject-base %else Requires: dbus-python, pygobject3-base %endif Requires: virt-what, ethtool, gawk, hdparm Requires: util-linux, dbus, polkit %if 0%{?fedora} > 22 || 0%{?rhel} > 7 Recommends: kernel-tools %endif %description The tuned package contains a daemon that tunes system settings dynamically. It does so by monitoring the usage of several system components periodically. Based on that information components will then be put into lower or higher power saving modes to adapt to the current usage. Currently only ethernet network and ATA harddisk devices are implemented. %if 0%{?rhel} <= 7 && 0%{!?fedora:1} # RHEL <= 7 %global docdir %{_docdir}/%{name}-%{version} %else # RHEL > 7 || fedora %global docdir %{_docdir}/%{name} %endif %package gtk Summary: GTK GUI for tuned Requires: %{name} = %{version}-%{release} Requires: powertop, polkit # requires for packages with inconsistent python2/3 names %if %{with python3} Requires: python3-gobject-base %else Requires: pygobject3-base %endif %description gtk GTK GUI that can control tuned and provides simple profile editor. %package utils Requires: %{name} = %{version}-%{release} Requires: powertop Summary: Various tuned utilities %description utils This package contains utilities that can help you to fine tune and debug your system and manage tuned profiles. %package utils-systemtap Summary: Disk and net statistic monitoring systemtap scripts Requires: %{name} = %{version}-%{release} Requires: systemtap %description utils-systemtap This package contains several systemtap scripts to allow detailed manual monitoring of the system. Instead of the typical IO/sec it collects minimal, maximal and average time between operations to be able to identify applications that behave power inefficient (many small operations instead of fewer large ones). %package profiles-sap Summary: Additional tuned profile(s) targeted to SAP NetWeaver loads Requires: %{name} = %{version} %description profiles-sap Additional tuned profile(s) targeted to SAP NetWeaver loads. %package profiles-mssql Summary: Additional tuned profile(s) for MS SQL Server Requires: %{name} = %{version} %description profiles-mssql Additional tuned profile(s) for MS SQL Server. %package profiles-oracle Summary: Additional tuned profile(s) targeted to Oracle loads Requires: %{name} = %{version} %description profiles-oracle Additional tuned profile(s) targeted to Oracle loads. 
%package profiles-sap-hana Summary: Additional tuned profile(s) targeted to SAP HANA loads Requires: %{name} = %{version} %description profiles-sap-hana Additional tuned profile(s) targeted to SAP HANA loads. %package profiles-atomic Summary: Additional tuned profile(s) targeted to Atomic Requires: %{name} = %{version} %description profiles-atomic Additional tuned profile(s) targeted to Atomic host and guest. %package profiles-realtime Summary: Additional tuned profile(s) targeted to realtime Requires: %{name} = %{version} Requires: tuna %description profiles-realtime Additional tuned profile(s) targeted to realtime. %package profiles-nfv-guest Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV) guest Requires: %{name} = %{version} Requires: %{name}-profiles-realtime = %{version} Requires: tuna %description profiles-nfv-guest Additional tuned profile(s) targeted to Network Function Virtualization (NFV) guest. %package profiles-nfv-host Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV) host Requires: %{name} = %{version} Requires: %{name}-profiles-realtime = %{version} Requires: tuna %if 0%{?rhel} == 7 Requires: qemu-kvm-tools-rhev %else Recommends: tuned-profiles-nfv-host-bin %endif %description profiles-nfv-host Additional tuned profile(s) targeted to Network Function Virtualization (NFV) host. # this is kept for backward compatibility, it should be dropped for RHEL-8 %package profiles-nfv Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV) Requires: %{name} = %{version} Requires: %{name}-profiles-nfv-guest = %{version} Requires: %{name}-profiles-nfv-host = %{version} %description profiles-nfv Additional tuned profile(s) targeted to Network Function Virtualization (NFV). %package profiles-cpu-partitioning Summary: Additional tuned profile(s) optimized for CPU partitioning Requires: %{name} = %{version} %description profiles-cpu-partitioning Additional tuned profile(s) optimized for CPU partitioning. %package profiles-compat Summary: Additional tuned profiles mainly for backward compatibility with tuned 1.0 Requires: %{name} = %{version} %description profiles-compat Additional tuned profiles mainly for backward compatibility with tuned 1.0. It can be also used to fine tune your system for specific scenarios. 
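# A minimal sketch of how the %%bcond_with / %%bcond_without conditionals declared at the
# top of this spec are toggled at build time; the rpmbuild invocations below are
# illustrative assumptions, not part of the upstream packaging:
#   rpmbuild -ba tuned.spec                      # platform default (%%{_py} resolves per the Fedora/RHEL checks above)
#   rpmbuild -ba --without python3 tuned.spec    # force the python2 build where python3 is the default
#   rpmbuild -ba --with snapshot tuned.spec      # build a git snapshot, appending the date/commit suffix to Release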
%prep %setup -q -n %{name}-%{version}%{?prerel2} %build %install make install DESTDIR=%{buildroot} DOCDIR=%{docdir} \ %if %{with python3} PYTHON=python3 %else PYTHON=python2 %endif %if 0%{?rhel} sed -i 's/\(dynamic_tuning[ \t]*=[ \t]*\).*/\10/' %{buildroot}%{_sysconfdir}/tuned/tuned-main.conf %endif # conditional support for grub2, grub2 is not available on all architectures # and tuned is noarch package, thus the following hack is needed mkdir -p %{buildroot}%{_datadir}/tuned/grub2 mv %{buildroot}%{_sysconfdir}/grub.d/00_tuned %{buildroot}%{_datadir}/tuned/grub2/00_tuned rmdir %{buildroot}%{_sysconfdir}/grub.d # ghost for persistent storage mkdir -p %{buildroot}%{_var}/lib/tuned # ghost for NFV mkdir -p %{buildroot}%{_sysconfdir}/modprobe.d touch %{buildroot}%{_sysconfdir}/modprobe.d/kvm.rt.tuned.conf # validate desktop file desktop-file-validate %{buildroot}%{_datadir}/applications/tuned-gui.desktop %post %systemd_post tuned.service # convert active_profile from full path to name (if needed) sed -i 's|.*/\([^/]\+\)/[^\.]\+\.conf|\1|' /etc/tuned/active_profile # convert GRUB_CMDLINE_LINUX to GRUB_CMDLINE_LINUX_DEFAULT if [ -r "%{_sysconfdir}/default/grub" ]; then sed -i 's/GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX \\$tuned_params"/GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT \\$tuned_params"/' \ %{_sysconfdir}/default/grub fi %preun %systemd_preun tuned.service if [ "$1" == 0 ]; then # clear persistent storage rm -f %{_var}/lib/tuned/* # clear temporal storage rm -f /run/tuned/* fi %postun %systemd_postun_with_restart tuned.service # conditional support for grub2, grub2 is not available on all architectures # and tuned is noarch package, thus the following hack is needed if [ "$1" == 0 ]; then rm -f %{_sysconfdir}/grub.d/00_tuned || : # unpatch /etc/default/grub if [ -r "%{_sysconfdir}/default/grub" ]; then sed -i '/GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }\\$tuned_params"/d' %{_sysconfdir}/default/grub fi fi %triggerun -- tuned < 2.0-0 # remove ktune from old tuned, now part of tuned /usr/sbin/service ktune stop &>/dev/null || : /usr/sbin/chkconfig --del ktune &>/dev/null || : %posttrans # conditional support for grub2, grub2 is not available on all architectures # and tuned is noarch package, thus the following hack is needed if [ -d %{_sysconfdir}/grub.d ]; then cp -a %{_datadir}/tuned/grub2/00_tuned %{_sysconfdir}/grub.d/00_tuned selinuxenabled &>/dev/null && \ restorecon %{_sysconfdir}/grub.d/00_tuned &>/dev/null || : fi %post gtk /bin/touch --no-create %{_datadir}/icons/hicolor &>/dev/null || : %postun gtk if [ $1 -eq 0 ] ; then /bin/touch --no-create %{_datadir}/icons/hicolor &>/dev/null /usr/bin/gtk-update-icon-cache -f %{_datadir}/icons/hicolor &>/dev/null || : fi %posttrans gtk /usr/bin/gtk-update-icon-cache -f %{_datadir}/icons/hicolor &>/dev/null || : %files %defattr(-,root,root,-) %exclude %{docdir}/README.utils %exclude %{docdir}/README.scomes %exclude %{docdir}/README.NFV %doc %{docdir} %{_datadir}/bash-completion/completions/tuned-adm %if %{with python3} %exclude %{python3_sitelib}/tuned/gtk %{python3_sitelib}/tuned %else %exclude %{python2_sitelib}/tuned/gtk %{python2_sitelib}/tuned %endif %{_sbindir}/tuned %{_sbindir}/tuned-adm %exclude %{_sysconfdir}/tuned/realtime-variables.conf %exclude %{_sysconfdir}/tuned/realtime-virtual-guest-variables.conf %exclude %{_sysconfdir}/tuned/realtime-virtual-host-variables.conf %exclude %{_sysconfdir}/tuned/cpu-partitioning-variables.conf %exclude 
%{_sysconfdir}/tuned/sap-hana-vmware-variables.conf %exclude %{_prefix}/lib/tuned/default %exclude %{_prefix}/lib/tuned/desktop-powersave %exclude %{_prefix}/lib/tuned/laptop-ac-powersave %exclude %{_prefix}/lib/tuned/server-powersave %exclude %{_prefix}/lib/tuned/laptop-battery-powersave %exclude %{_prefix}/lib/tuned/enterprise-storage %exclude %{_prefix}/lib/tuned/spindown-disk %exclude %{_prefix}/lib/tuned/sap-netweaver %exclude %{_prefix}/lib/tuned/sap-hana %exclude %{_prefix}/lib/tuned/sap-hana-vmware %exclude %{_prefix}/lib/tuned/mssql %exclude %{_prefix}/lib/tuned/oracle %exclude %{_prefix}/lib/tuned/atomic-host %exclude %{_prefix}/lib/tuned/atomic-guest %exclude %{_prefix}/lib/tuned/realtime %exclude %{_prefix}/lib/tuned/realtime-virtual-guest %exclude %{_prefix}/lib/tuned/realtime-virtual-host %exclude %{_prefix}/lib/tuned/cpu-partitioning %{_prefix}/lib/tuned %dir %{_sysconfdir}/tuned %dir %{_sysconfdir}/tuned/recommend.d %dir %{_libexecdir}/tuned %{_libexecdir}/tuned/defirqaffinity* %config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/active_profile %config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/profile_mode %config(noreplace) %{_sysconfdir}/tuned/tuned-main.conf %config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/bootcmdline %{_sysconfdir}/dbus-1/system.d/com.redhat.tuned.conf %verify(not size mtime md5) %{_sysconfdir}/modprobe.d/tuned.conf %{_tmpfilesdir}/tuned.conf %{_unitdir}/tuned.service %dir %{_localstatedir}/log/tuned %dir /run/tuned %dir %{_var}/lib/tuned %{_mandir}/man5/tuned* %{_mandir}/man7/tuned-profiles.7* %{_mandir}/man8/tuned* %dir %{_datadir}/tuned %{_datadir}/tuned/grub2 %{_datadir}/polkit-1/actions/com.redhat.tuned.policy %ghost %{_sysconfdir}/modprobe.d/kvm.rt.tuned.conf %files gtk %defattr(-,root,root,-) %{_sbindir}/tuned-gui %if %{with python3} %{python3_sitelib}/tuned/gtk %else %{python2_sitelib}/tuned/gtk %endif %{_datadir}/tuned/ui %{_datadir}/polkit-1/actions/com.redhat.tuned.gui.policy %{_datadir}/icons/hicolor/scalable/apps/tuned.svg %{_datadir}/applications/tuned-gui.desktop %files utils %doc COPYING %{_bindir}/powertop2tuned %{_libexecdir}/tuned/pmqos-static* %files utils-systemtap %defattr(-,root,root,-) %doc doc/README.utils %doc doc/README.scomes %doc COPYING %{_sbindir}/varnetload %{_sbindir}/netdevstat %{_sbindir}/diskdevstat %{_sbindir}/scomes %{_mandir}/man8/varnetload.* %{_mandir}/man8/netdevstat.* %{_mandir}/man8/diskdevstat.* %{_mandir}/man8/scomes.* %files profiles-sap %defattr(-,root,root,-) %{_prefix}/lib/tuned/sap-netweaver %{_mandir}/man7/tuned-profiles-sap.7* %files profiles-sap-hana %defattr(-,root,root,-) %config(noreplace) %{_sysconfdir}/tuned/sap-hana-vmware-variables.conf %{_prefix}/lib/tuned/sap-hana %{_prefix}/lib/tuned/sap-hana-vmware %{_mandir}/man7/tuned-profiles-sap-hana.7* %files profiles-mssql %defattr(-,root,root,-) %{_prefix}/lib/tuned/mssql %{_mandir}/man7/tuned-profiles-mssql.7* %files profiles-oracle %defattr(-,root,root,-) %{_prefix}/lib/tuned/oracle %{_mandir}/man7/tuned-profiles-oracle.7* %files profiles-atomic %defattr(-,root,root,-) %{_prefix}/lib/tuned/atomic-host %{_prefix}/lib/tuned/atomic-guest %{_mandir}/man7/tuned-profiles-atomic.7* %files profiles-realtime %defattr(-,root,root,-) %config(noreplace) %{_sysconfdir}/tuned/realtime-variables.conf %{_prefix}/lib/tuned/realtime %{_mandir}/man7/tuned-profiles-realtime.7* %files profiles-nfv-guest %defattr(-,root,root,-) %config(noreplace) %{_sysconfdir}/tuned/realtime-virtual-guest-variables.conf 
%{_prefix}/lib/tuned/realtime-virtual-guest %{_mandir}/man7/tuned-profiles-nfv-guest.7* %files profiles-nfv-host %defattr(-,root,root,-) %config(noreplace) %{_sysconfdir}/tuned/realtime-virtual-host-variables.conf %{_prefix}/lib/tuned/realtime-virtual-host %{_mandir}/man7/tuned-profiles-nfv-host.7* %files profiles-nfv %defattr(-,root,root,-) %doc %{docdir}/README.NFV %files profiles-cpu-partitioning %defattr(-,root,root,-) %config(noreplace) %{_sysconfdir}/tuned/cpu-partitioning-variables.conf %{_prefix}/lib/tuned/cpu-partitioning %{_mandir}/man7/tuned-profiles-cpu-partitioning.7* %files profiles-compat %defattr(-,root,root,-) %{_prefix}/lib/tuned/default %{_prefix}/lib/tuned/desktop-powersave %{_prefix}/lib/tuned/laptop-ac-powersave %{_prefix}/lib/tuned/server-powersave %{_prefix}/lib/tuned/laptop-battery-powersave %{_prefix}/lib/tuned/enterprise-storage %{_prefix}/lib/tuned/spindown-disk %{_mandir}/man7/tuned-profiles-compat.7* %changelog * Wed Jul 4 2018 Jaroslav Škarvada - 2.10.0-1 - new release - rebased tuned to latest upstream related: rhbz#1546598 - IRQ affinity handled by scheduler plugin resolves: rhbz#1590937 * Mon Jun 11 2018 Jaroslav Škarvada - 2.10.0-0.1.rc1 - new release - rebased tuned to latest upstream resolves: rhbz#1546598 - script: show stderr output in the log resolves: rhbz#1536476 - realtime-virtual-host: script.sh: add error checking resolves: rhbz#1461509 - man: improved tuned-profiles-cpu-partitioning.7 resolves: rhbz#1548148 - bootloader: check if grub2_cfg_file_name is None in _remove_grub2_tuning() resolves: rhbz#1571403 - plugin_scheduler: whitelist/blacklist processed also for thread names resolves: rhbz#1512295 - bootloader: patch all GRUB2 config files resolves: rhbz#1556990 - profiles: added mssql profile resolves: rhbz#1442122 - tuned-adm: print log excerpt when changing profile resolves: rhbz#1538745 - cpu-partitioning: use no_balance_cores instead of no_rebalance_cores resolves: rhbz#1550573 - sysctl: support assignment modifiers as other plugins do resolves: rhbz#1564092 - oracle: fixed ip_local_port_range parity warning resolves: rhbz#1527219 - Fix verifying cpumask on systems with more than 32 cores resolves: rhbz#1528368 - oracle: updated the profile to be in sync with KCS 39188 resolves: rhbz#1447323 * Sun Oct 29 2017 Jaroslav Škarvada - 2.9.0-1 - new release - rebased tuned to latest upstream related: rhbz#1467576 * Fri Oct 20 2017 Jaroslav Škarvada - 2.9.0-0.2.rc2 - new release - rebased tuned to latest upstream related: rhbz#1467576 - fixed expansion of the variables in the 'devices' section related: rhbz#1490399 - cpu-partitioning: add no_rebalance_cores= option resolves: rhbz#1497182 * Thu Oct 12 2017 Jaroslav Škarvada - 2.9.0-0.1.rc1 - new release - rebased tuned to latest upstream resolves: rhbz#1467576 - added recommend.d functionality resolves: rhbz#1459146 - recommend: added support for matching of processes resolves: rhbz#1461838 - plugin_video: added support for the 'dpm' power method resolves: rhbz#1417659 - list available profiles on 'tuned-adm profile' resolves: rhbz#988433 - cpu-partitioning: used tuned instead of tuna for cores isolation resolves: rhbz#1442229 - inventory: added workaround for pyudev < 0.18 resolves: rhbz#1251240 - realtime: used skew_tick=1 in kernel cmdline resolves: rhbz#1447938 - realtime-virtual-guest: re-assigned kernel thread priorities resolves: rhbz#1452357 - bootloader: splitted string for removal from cmdline resolves: rhbz#1461279 - network-latency: added skew_tick=1 kernel command line parameter 
resolves: rhbz#1451073 - bootloader: accepted only certain values for initrd_remove_dir resolves: rhbz#1455161 - increased udev monitor buffer size, made it configurable resolves: rhbz#1442306 - bootloader: don't add nonexistent overlay image to grub.cfg resolves: rhbz#1454340 - plugin_cpu: don't log error in execute() if EPB is not supported resolves: rhbz#1443182 - sap-hana: fixed description of the sap-hana profiles resolves: rhbz#1482005 - plugin_systemd: on full_rollback notify about need of initrd regeneration resolves: rhbz#1469258 - don't log errors about missing files on verify with ignore_missing set resolves: rhbz#1451435 - plugin_scheduler: improved logging resolves: rhbz#1474961 - improved checking if we are rebooting or not resolves: rhbz#1475571 - started dbus exports after a profile is applied resolves: rhbz#1443142 - sap-hana: changed force_latency to 70 resolves: rhbz#1501252 * Fri Apr 7 2017 Jaroslav Škarvada - 2.8.0-1 - new release - rebase tuned to latest upstream resolves: rhbz#1388454 - cpu-partitioning: enabled timer migration resolves: rhbz#1408308 - cpu-partitioning: disabled kvmclock sync and ple resolves: rhbz#1395855 - spec: muted error if there is no selinux support resolves: rhbz#1404214 - units: implemented instance priority resolves: rhbz#1246172 - bootloader: added support for initrd overlays resolves: rhbz#1414098 - cpu-partitioning: set CPUAffinity early in initrd image resolves: rhbz#1394965 - cpu-partitioning: set workqueue affinity early resolves: rhbz#1395899 - scsi_host: fixed probing of ALPM, missing ALPM logged as info resolves: rhbz#1416712 - added new profile cpu-partitioning resolves: rhbz#1359956 - bootloader: improved inheritance resolves: rhbz#1274464 - units: mplemented udev-based regexp device matching resolves: rhbz#1251240 - units: introduced pre_script, post_script resolves: rhbz#1246176 - realtime-virtual-host: accommodate new ktimersoftd thread resolves: rhbz#1332563 - defirqaffinity: fixed traceback due to syntax error resolves: rhbz#1369791 - variables: support inheritance of variables resolves: rhbz#1433496 - scheduler: added support for cores isolation resolves: rhbz#1403309 - tuned-profiles-nfv splitted to host/guest and dropped unneeded dependency resolves: rhbz#1413111 - desktop: fixed typo in profile summary resolves: rhbz#1421238 - with systemd don't do full rollback on shutdown / reboot resolves: rhbz#1421286 - builtin functions: added virt_check function and support to include resolves: rhbz#1426654 - cpulist_present: explicitly sorted present CPUs resolves: rhbz#1432240 - plugin_scheduler: fixed initialization resolves: rhbz#1433496 - log errors when applying a profile fails resolves: rhbz#1434360 - systemd: added support for older systemd CPUAffinity syntax resolves: rhbz#1441791 - scheduler: added workarounds for low level exceptions from python-linux-procfs resolves: rhbz#1441792 - bootloader: workaround for adding tuned_initrd to new kernels on restart resolves: rhbz#1441797 * Tue Aug 2 2016 Jaroslav Škarvada - 2.7.1-1 - new-release - gui: fixed traceback caused by DBus paths copy&paste error related: rhbz#1356369 - tuned-adm: fixed traceback of 'tuned-adm list' if daemon is not running resolves: rhbz#1358857 - tuned-adm: fixed profile_info traceback resolves: rhbz#1362481 * Tue Jul 19 2016 Jaroslav Škarvada - 2.7.0-1 - new-release - gui: fixed save profile resolves: rhbz#1242491 - tuned-adm: added --ignore-missing parameter resolves: rhbz#1243807 - plugin_vm: added transparent_hugepage alias resolves: rhbz#1249610 - 
plugins: added modules plugin resolves: rhbz#1249618 - plugin_cpu: do not show error if cpupower or x86_energy_perf_policy are missing resolves: rhbz#1254417 - tuned-adm: fixed restart attempt if tuned is not running resolves: rhbz#1258755 - nfv: avoided race condition by using synchronous mode resolves: rhbz#1259039 - realtime: added check for isolcpus sanity resolves: rhbz#1264128 - pm_qos: fixed exception if PM_QoS is not available resolves: rhbz#1296137 - plugin_sysctl: reapply system sysctl after Tuned sysctl are applied resolves: rhbz#1302953 - atomic: increase number of inotify watches resolves: rhbz#1322001 - realtime-virtual-host/guest: added rcu_nocbs kernel boot parameter resolves: rhbz#1334479 - realtime: fixed kernel.sched_rt_runtime_us to be -1 resolves: rhbz#1346715 - tuned-adm: fixed detection of no_daemon mode resolves: rhbz#1351536 - plugin_base: correctly strip assignment modifiers even if not used resolves: rhbz#1353142 - plugin_disk: try to workaround embedded '/' in device names related: rhbz#1353142 - sap-hana: explicitly setting kernel.numa_balancing = 0 for better performance resolves: rhbz#1355768 - switched to polkit authorization resolves: rhbz#1095142 - plugins: added scsi_host plugin resolves: rhbz#1246992 - spec: fixed conditional support for grub2 to work with selinux resolves: rhbz#1351937 - gui: added tuned icon and desktop file resolves: rhbz#1356369 * Tue Jan 5 2016 Jaroslav Škarvada - 2.6.0-1 - new-release - plugin_cpu: do not show error if cpupower or x86_energy_perf_policy are missing - plugin_sysctl: fixed quoting of sysctl values resolves: rhbz#1254538 - tuned-adm: added log file location hint to verify command output - libexec: fixed listdir and isdir in defirqaffinity.py resolves: rhbz#1252160 - plugin_cpu: save and restore only intel pstate attributes that were changed resolves: rhbz#1252156 - functions: fixed sysfs save to work with options resolves: rhbz#1251507 - plugins: added scsi_host plugin - tuned-adm: fixed restart attempt if tuned is not running - spec: fixed post scriptlet to work without grub resolves: rhbz#1265654 - tuned-profiles-nfv: fix find-lapictscdeadline-optimal.sh for CPUS where ns > 6500 resolves: rhbz#1267284 - functions: fixed restore_logs_syncing to preserve SELinux context on rsyslog.conf resolves: rhbz#1268901 - realtime: set unboud workqueues cpumask resolves: rhbz#1259043 - spec: correctly remove tuned footprint from /etc/default/grub resolves: rhbz#1268845 - gui: fixed creation of new profile resolves: rhbz#1274609 - profiles: removed nohz_full from the realtime profile resolves: rhbz#1274486 - profiles: Added nohz_full and nohz=on to realtime guest/host profiles resolves: rhbz#1274445 - profiles: fixed lapic_timer_adv_ns cache resolves: rhbz#1259452 - plugin_sysctl: pass verification even if the option doesn't exist related: rhbz#1252153 - added support for 'summary' and 'description' of profiles, extended D-Bus API for Cockpit related: rhbz#1228356 * Tue Aug 4 2015 Jaroslav Škarvada - 2.5.1-1 - new-release related: rhbz#1155052 - plugin_scheduler: work with nohz_full resolves: rhbz#1247184 - fixed realtime-virtual-guest/host profiles packaged twice resolves: rhbz#1249028 - fixed requirements of realtime and nfv profiles - fixed tuned-gui not starting - various other minor fixes * Sun Jul 5 2015 Jaroslav Škarvada - 2.5.0-1 - new-release resolves: rhbz#1155052 - add support for ethtool -C to tuned network plugin resolves: rhbz#1152539 - add support for ethtool -K to tuned network plugin resolves: rhbz#1152541 - add 
support for calculation of values for the kernel command line resolves: rhbz#1191595 - no error output if there is no hdparm installed resolves: rhbz#1191775 - do not run hdparm on hotplug events if there is no hdparm tuning resolves: rhbz#1193682 - add oracle tuned profile resolves: rhbz#1196298 - fix bash completions for tuned-adm resolves: rhbz#1207668 - add glob support to tuned sysfs plugin resolves: rhbz#1212831 - add tuned-adm verify subcommand resolves: rhbz#1212836 - do not install tuned kernel command line to rescue kernels resolves: rhbz#1223864 - add variables support resolves: rhbz#1225124 - add built-in support for unit conversion into tuned resolves: rhbz#1225135 - fix vm.max_map_count setting in sap-netweaver profile resolves: rhbz#1228562 - add tuned profile for RHEL-RT resolves: rhbz#1228801 - plugin_scheduler: added support for runtime tuning of processes resolves: rhbz#1148546 - add support for changing elevators on xvd* devices (Amazon EC2) resolves: rhbz#1170152 - add workaround to be run after systemd-sysctl resolves: rhbz#1189263 - do not change settings of transparent hugepages if set in kernel cmdline resolves: rhbz#1189868 - add tuned profiles for RHEL-NFV resolves: rhbz#1228803 - plugin_bootloader: apply $tuned_params to existing kernels resolves: rhbz#1233004 * Thu Oct 16 2014 Jaroslav Škarvada - 2.4.1-1 - new-release - fixed return code of tuned grub template resolves: rhbz#1151768 - plugin_bootloader: fix for multiple parameters on command line related: rhbz#1148711 - tuned-adm: fixed traceback on "tuned-adm list" resolves: rhbz#1149162 - plugin_bootloader is automatically disabled if grub2 is not found resolves: rhbz#1150047 - plugin_disk: set_spindown and set_APM made independent resolves: rhbz#976725 * Wed Oct 1 2014 Jaroslav Škarvada - 2.4.0-1 - new-release resolves: rhbz#1093883 - fixed traceback if profile cannot be loaded related: rhbz#953128 - powertop2tuned: fixed traceback if rewriting file instead of dir resolves: rhbz#963441 - daemon: fixed race condition in start/stop - improved timings, it can be fine tuned in /etc/tuned/tuned-main.conf resolves: rhbz#1028122 - throughput-performance: altered dirty ratios for better performance resolves: rhbz#1043533 - latency-performance: leaving THP on its default resolves: rhbz#1064510 - used throughput-performance profile on server by default resolves: rhbz#1063481 - network-latency: added new profile resolves: rhbz#1052418 - network-throughput: added new profile resolves: rhbz#1052421 - recommend.conf: fixed config file resolves: rhbz#1069123 - spec: added kernel-tools requirement resolves: rhbz#1073008 - systemd: added cpupower.service conflict resolves: rhbz#1073392 - balanced: used medium_power ALPM policy - added support for >, < assignment modifiers in tuned.conf - handled root block devices - balanced: used conservative CPU governor resolves: rhbz#1124125 - plugins: added selinux plugin - plugin_net: added nf_conntrack_hashsize parameter - profiles: added atomic-host profile resolves: rhbz#1091977 - profiles: added atomic-guest profile resolves: rhbz#1091979 - moved profile autodetection from post install script to tuned daemon resolves: rhbz#1144067 - profiles: included sap-hana and sap-hana-vmware profiles - man: structured profiles manual pages according to sub-packages - added missing hdparm dependency resolves: rhbz#1144858 - improved error handling of switch_profile resolves: rhbz#1068699 - tuned-adm: active: detect whether tuned deamon is running related: rhbz#1068699 - removed active_profile 
from RPM verification resolves: rhbz#1104126 - plugin_disk: readahead value can be now specified in sectors resolves: rhbz#1127127 - plugins: added bootloader plugin resolves: rhbz#1044111 - plugin_disk: added error counter to hdparm calls - plugins: added scheduler plugin resolves: rhbz#1100826 - added tuned-gui * Wed Nov 6 2013 Jaroslav Škarvada - 2.3.0-1 - new-release resolves: rhbz#1020743 - audio plugin: fixed audio settings in standard profiles resolves: rhbz#1019805 - video plugin: fixed tunings - daemon: fixed crash if preset profile is not available resolves: rhbz#953128 - man: various updates and corrections - functions: fixed usb and bluetooth handling - tuned: switched to lightweighted pygobject3-base - daemon: added global config for dynamic_tuning resolves: rhbz#1006427 - utils: added pmqos-static script for debug purposes resolves: rhbz#1015676 - throughput-performance: various fixes resolves: rhbz#987570 - tuned: added global option update_interval - plugin_cpu: added support for x86_energy_perf_policy resolves: rhbz#1015675 - dbus: fixed KeyboardInterrupt handling - plugin_cpu: added support for intel_pstate resolves: rhbz#996722 - profiles: various fixes resolves: rhbz#922068 - profiles: added desktop profile resolves: rhbz#996723 - tuned-adm: implemented non DBus fallback control - profiles: added sap profile - tuned: lowered CPU usage due to python bug resolves: rhbz#917587 * Tue Mar 19 2013 Jaroslav Škarvada - 2.2.2-1 - new-release: - cpu plugin: fixed cpupower workaround - cpu plugin: fixed crash if cpupower is installed * Fri Mar 1 2013 Jaroslav Škarvada - 2.2.1-1 - new release: - audio plugin: fixed error handling in _get_timeout - removed cpupower dependency, added sysfs fallback - powertop2tuned: fixed parser crash on binary garbage resolves: rhbz#914933 - cpu plugin: dropped multicore_powersave as kernel upstream already did - plugins: options manipulated by dynamic tuning are now correctly saved and restored - powertop2tuned: added alias -e for --enable option - powertop2tuned: new option -m, --merge-profile to select profile to merge - prefer transparent_hugepage over redhat_transparent_hugepage - recommend: use recommend.conf not autodetect.conf - tuned.service: switched to dbus type service resolves: rhbz#911445 - tuned: new option --pid, -P to write PID file - tuned, tuned-adm: added new option --version, -v to show version - disk plugin: use APM value 254 for cleanup / APM disable instead of 255 resolves: rhbz#905195 - tuned: new option --log, -l to select log file - powertop2tuned: avoid circular deps in include (one level check only) - powertop2tuned: do not crash if powertop is not installed - net plugin: added support for wake_on_lan static tuning resolves: rhbz#885504 - loader: fixed error handling - spec: used systemd-rpm macros resolves: rhbz#850347 * Mon Jan 28 2013 Jan Vcelak 2.2.0-1 - new release: - remove nobarrier from virtual-guest (data loss prevention) - devices enumeration via udev, instead of manual retrieval - support for dynamically inserted devices (currently disk plugin) - dropped rfkill plugins (bluetooth and wifi), the code didn't work * Wed Jan 2 2013 Jaroslav Škarvada - 2.1.2-1 - new release: - systemtap {disk,net}devstat: fix typo in usage - switched to configobj parser - latency-performance: disabled THP - fixed fd leaks on subprocesses * Thu Dec 06 2012 Jan Vcelak 2.1.1-1 - fix: powertop2tuned execution - fix: ownership of /etc/tuned * Mon Dec 03 2012 Jan Vcelak 2.1.0-1 - new release: - daemon: allow running without selected 
profile - daemon: fix profile merging, allow only safe characters in profile names - daemon: implement missing methods in DBus interface - daemon: implement profile recommendation - daemon: improve daemonization, PID file handling - daemon: improved device matching in profiles, negation possible - daemon: various internal improvements - executables: check for EUID instead of UID - executables: run python with -Es to increase security - plugins: cpu - fix cpupower execution - plugins: disk - fix option setting - plugins: mounts - new, currently supports only barriers control - plugins: sysctl - fix a bug preventing settings application - powertop2tuned: speedup, fix crashes with non-C locales - powertop2tuned: support for powertop 2.2 output - profiles: progress on replacing scripts with plugins - tuned-adm: bash completion - suggest profiles from all supported locations - tuned-adm: complete switch to D-bus - tuned-adm: full control to users with physical access * Mon Oct 08 2012 Jaroslav Škarvada - 2.0.2-1 - New version - Systemtap scripts moved to utils-systemtap subpackage * Sun Jul 22 2012 Fedora Release Engineering - 2.0.1-4 - Rebuilt for https://fedoraproject.org/wiki/Fedora_18_Mass_Rebuild * Tue Jun 12 2012 Jaroslav Škarvada - 2.0.1-3 - another powertop-2.0 compatibility fix Resolves: rhbz#830415 * Tue Jun 12 2012 Jan Kaluza - 2.0.1-2 - fixed powertop2tuned compatibility with powertop-2.0 * Tue Apr 03 2012 Jaroslav Škarvada - 2.0.1-1 - new version * Fri Mar 30 2012 Jan Vcelak 2.0-1 - first stable release tuned-2.10.0/tuned.tmpfiles000066400000000000000000000000701331721725100155740ustar00rootroot00000000000000# tuned runtime directory d /run/tuned 0755 root root - tuned-2.10.0/tuned/000077500000000000000000000000001331721725100140325ustar00rootroot00000000000000tuned-2.10.0/tuned/__init__.py000066400000000000000000000016711331721725100161500ustar00rootroot00000000000000# # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # __copyright__ = "Copyright 2008-2013 Red Hat, Inc." 
__license__ = "GPLv2+" __email__ = "power-management@lists.fedoraproject.org" tuned-2.10.0/tuned/admin/000077500000000000000000000000001331721725100151225ustar00rootroot00000000000000tuned-2.10.0/tuned/admin/__init__.py000066400000000000000000000001161331721725100172310ustar00rootroot00000000000000from .admin import * from .exceptions import * from .dbus_controller import * tuned-2.10.0/tuned/admin/admin.py000066400000000000000000000265461331721725100166010ustar00rootroot00000000000000 from __future__ import print_function import tuned.admin from tuned.utils.commands import commands from tuned.profiles import Locator as profiles_locator from .exceptions import TunedAdminDBusException from tuned.exceptions import TunedException import tuned.consts as consts import os import sys import errno import time import threading import logging class Admin(object): def __init__(self, dbus = True, debug = False, async = False, timeout = consts.ADMIN_TIMEOUT, log_level = logging.ERROR): self._dbus = dbus self._debug = debug self._async = async self._timeout = timeout self._cmd = commands(debug) self._profiles_locator = profiles_locator(consts.LOAD_DIRECTORIES) self._daemon_action_finished = threading.Event() self._daemon_action_profile = "" self._daemon_action_result = True self._daemon_action_errstr = "" self._controller = None self._log_token = None self._log_level = log_level if self._dbus: self._controller = tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT, debug) try: self._controller.set_signal_handler(consts.DBUS_SIGNAL_PROFILE_CHANGED, self._signal_profile_changed_cb) except TunedAdminDBusException as e: self._error(e) self._dbus = False def _error(self, message): print(message, file=sys.stderr) def _signal_profile_changed_cb(self, profile_name, result, errstr): # ignore successive signals if the signal is not yet processed if not self._daemon_action_finished.is_set(): self._daemon_action_profile = profile_name self._daemon_action_result = result self._daemon_action_errstr = errstr self._daemon_action_finished.set() def _tuned_is_running(self): try: os.kill(int(self._cmd.read_file(consts.PID_FILE)), 0) except OSError as e: return e.errno == errno.EPERM except (ValueError, IOError) as e: return False return True # run the action specified by the action_name with args def action(self, action_name, *args, **kwargs): if action_name is None or action_name == "": return False action = None action_dbus = None res = False try: action_dbus = getattr(self, "_action_dbus_" + action_name) except AttributeError as e: self._dbus = False try: action = getattr(self, "_action_" + action_name) except AttributeError as e: if not self._dbus: self._error(e + ", action '%s' is not implemented" % action_name) return False if self._dbus: try: self._controller.set_on_exit_action( self._log_capture_finish) self._controller.set_action(action_dbus, *args, **kwargs) res = self._controller.run() except TunedAdminDBusException as e: self._error(e) self._dbus = False if not self._dbus: res = action(*args, **kwargs) return res def _print_profiles(self, profile_names): print("Available profiles:") for profile in profile_names: if profile[1] is not None and profile[1] != "": print(self._cmd.align_str("- %s" % profile[0], 30, "- %s" % profile[1])) else: print("- %s" % profile[0]) def _action_dbus_list(self): try: profile_names = self._controller.profiles2() except TunedAdminDBusException as e: # fallback to older API profile_names = [(profile, "") for profile in self._controller.profiles()] 
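# Editor's note (descriptive comment, not part of the upstream source): the
# fallback above relies on the newer profiles2() D-Bus call returning
# (name, summary) pairs while the legacy profiles() call returns bare profile
# names, so each name is padded with an empty summary. Either branch hands
# _print_profiles() the same shape, roughly:
#   [("balanced", "General non-specialized tuned profile"), ...]  # profiles2()
#   [("balanced", ""), ...]                                       # legacy fallback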
self._print_profiles(profile_names) self._action_dbus_active() return self._controller.exit(True) def _action_list(self): self._print_profiles(self._profiles_locator.get_known_names_summary()) self._action_active() return True def _dbus_get_active_profile(self): profile_name = self._controller.active_profile() if profile_name == "": profile_name = None self._controller.exit(True) return profile_name def _get_active_profile(self): profile_name, manual = self._cmd.get_active_profile() return profile_name def _get_profile_mode(self): (profile, manual) = self._cmd.get_active_profile() if manual is None: manual = profile is not None return consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO def _print_profile_info(self, profile, profile_info): if profile_info[0] == True: print("Profile name:") print(profile_info[1]) print() print("Profile summary:") print(profile_info[2]) print() print("Profile description:") print(profile_info[3]) return True else: print("Unable to get information about profile '%s'" % profile) return False def _action_dbus_profile_info(self, profile = ""): if profile == "": profile = self._dbus_get_active_profile() return self._controller.exit(self._print_profile_info(profile, self._controller.profile_info(profile))) def _action_profile_info(self, profile = ""): if profile == "": try: profile = self._get_active_profile() if profile is None: print("No current active profile.") return False except TunedException as e: self._error(str(e)) return False return self._print_profile_info(profile, self._profiles_locator.get_profile_attrs(profile, [consts.PROFILE_ATTR_SUMMARY, consts.PROFILE_ATTR_DESCRIPTION], ["", ""])) def _print_profile_name(self, profile_name): if profile_name is None: print("No current active profile.") return False else: print("Current active profile: %s" % profile_name) return True def _action_dbus_active(self): return self._controller.exit(self._print_profile_name(self._dbus_get_active_profile())) def _action_active(self): try: profile_name = self._get_active_profile() except TunedException as e: self._error(str(e)) return False if profile_name is not None and not self._tuned_is_running(): print("It seems that tuned daemon is not running, preset profile is not activated.") print("Preset profile: %s" % profile_name) return True return self._print_profile_name(profile_name) def _print_profile_mode(self, mode): print("Profile selection mode: " + mode) def _action_dbus_profile_mode(self): mode, error = self._controller.profile_mode() self._print_profile_mode(mode) if error != "": self._error(error) return self._controller.exit(False) return self._controller.exit(True) def _action_profile_mode(self): try: mode = self._get_profile_mode() self._print_profile_mode(mode) return True except TunedException as e: self._error(str(e)) return False def _profile_print_status(self, ret, msg): if ret: if not self._controller.is_running() and not self._controller.start(): self._error("Cannot enable the tuning.") ret = False else: self._error(msg) return ret def _action_dbus_wait_profile(self, profile_name): if time.time() >= self._timestamp + self._timeout: print("Operation timed out after waiting %d seconds(s), you may try to increase timeout by using --timeout command line option or using --async." 
% self._timeout) return self._controller.exit(False) if self._daemon_action_finished.isSet(): if self._daemon_action_profile == profile_name: if not self._daemon_action_result: print("Error changing profile: %s" % self._daemon_action_errstr) return self._controller.exit(False) return self._controller.exit(True) return False def _log_capture_finish(self): if self._log_token is None or self._log_token == "": return try: log_msgs = self._controller.log_capture_finish( self._log_token) self._log_token = None print(log_msgs, end = "", file = sys.stderr) sys.stderr.flush() except TunedAdminDBusException as e: self._error("Error: Failed to stop log capture. Restart the Tuned daemon to prevent a memory leak.") def _action_dbus_profile(self, profiles): if len(profiles) == 0: return self._action_dbus_list() profile_name = " ".join(profiles) if profile_name == "": return self._controller.exit(False) self._daemon_action_finished.clear() if not self._async and self._log_level is not None: # 25 seconds default DBus timeout + 5 secs safety margin timeout = self._timeout + 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) (ret, msg) = self._controller.switch_profile(profile_name) if self._async or not ret: return self._controller.exit(self._profile_print_status(ret, msg)) else: self._timestamp = time.time() self._controller.set_action(self._action_dbus_wait_profile, profile_name) return self._profile_print_status(ret, msg) def _restart_tuned(self): print("Trying to (re)start tuned...") (ret, msg) = self._cmd.execute(["service", "tuned", "restart"]) if ret == 0: print("Tuned (re)started, changes applied.") else: print("Tuned (re)start failed, you need to (re)start tuned by hand for changes to apply.") def _set_profile(self, profile_name, manual): if profile_name in self._profiles_locator.get_known_names(): try: self._cmd.save_active_profile(profile_name, manual) self._restart_tuned() return True except TunedException as e: self._error(str(e)) self._error("Unable to switch profile.") return False else: self._error("Requested profile '%s' doesn't exist." 
% profile_name) return False def _action_profile(self, profiles): if len(profiles) == 0: return self._action_list() profile_name = " ".join(profiles) if profile_name == "": return False return self._set_profile(profile_name, True) def _action_dbus_auto_profile(self): profile_name = self._controller.recommend_profile() self._daemon_action_finished.clear() if not self._async and self._log_level is not None: # 25 seconds default DBus timeout + 5 secs safety margin timeout = self._timeout + 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) (ret, msg) = self._controller.auto_profile() if self._async or not ret: return self._controller.exit(self._profile_print_status(ret, msg)) else: self._timestamp = time.time() self._controller.set_action(self._action_dbus_wait_profile, profile_name) return self._profile_print_status(ret, msg) def _action_auto_profile(self): profile_name = self._cmd.recommend_profile() return self._set_profile(profile_name, False) def _action_dbus_recommend_profile(self): print(self._controller.recommend_profile()) return self._controller.exit(True) def _action_recommend_profile(self): print(self._cmd.recommend_profile()) return True def _action_dbus_verify_profile(self, ignore_missing): if ignore_missing: ret = self._controller.verify_profile_ignore_missing() else: ret = self._controller.verify_profile() if ret: print("Verfication succeeded, current system settings match the preset profile.") else: print("Verification failed, current system settings differ from the preset profile.") print("You can mostly fix this by restarting the Tuned daemon, e.g.:") print(" systemctl restart tuned") print("or") print(" service tuned restart") print("Sometimes (if some plugins like bootloader are used) a reboot may be required.") print("See tuned log file ('%s') for details." % consts.LOG_FILE) return self._controller.exit(ret) def _action_verify_profile(self, ignore_missing): print("Not supported in no_daemon mode.") return False def _action_dbus_off(self): # 25 seconds default DBus timeout + 5 secs safety margin timeout = 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) ret = self._controller.off() if not ret: self._error("Cannot disable active profile.") return self._controller.exit(ret) def _action_off(self): print("Not supported in no_daemon mode.") return False tuned-2.10.0/tuned/admin/dbus_controller.py000066400000000000000000000075551331721725100207100ustar00rootroot00000000000000import dbus import dbus.exceptions import time from dbus.mainloop.glib import DBusGMainLoop from gi.repository import GLib, GObject from .exceptions import TunedAdminDBusException __all__ = ["DBusController"] class DBusController(object): def __init__(self, bus_name, interface_name, object_name, debug = False): self._bus_name = bus_name self._interface_name = interface_name self._object_name = object_name self._proxy = None self._interface = None self._debug = debug self._main_loop = None self._action = None self._on_exit_action = None self._ret = True self._exit = False self._exception = None def _init_proxy(self): try: if self._proxy is None: DBusGMainLoop(set_as_default=True) self._main_loop = GLib.MainLoop() bus = dbus.SystemBus() self._proxy = bus.get_object(self._bus_name, self._object_name) self._interface = dbus.Interface(self._proxy, dbus_interface = self._interface_name) except dbus.exceptions.DBusException: raise TunedAdminDBusException("Cannot talk to Tuned daemon via DBus. 
Is Tuned daemon running?") def _idle(self): if self._action is not None: # This may (and very probably will) run in child thread, so catch and pass exceptions to the main thread try: self._action_exit_code = self._action(*self._action_args, **self._action_kwargs) except TunedAdminDBusException as e: self._exception = e self._exit = True if self._exit: if self._on_exit_action is not None: self._on_exit_action(*self._on_exit_action_args, **self._on_exit_action_kwargs) self._main_loop.quit() return False else: time.sleep(1) return True def set_on_exit_action(self, action, *args, **kwargs): self._on_exit_action = action self._on_exit_action_args = args self._on_exit_action_kwargs = kwargs def set_action(self, action, *args, **kwargs): self._action = action self._action_args = args self._action_kwargs = kwargs def run(self): self._exception = None GLib.idle_add(self._idle) self._main_loop.run() # Pass exception happened in child thread to the caller if self._exception is not None: raise self._exception return self._ret def _call(self, method_name, *args, **kwargs): self._init_proxy() try: method = self._interface.get_dbus_method(method_name) return method(*args, timeout=40) except dbus.exceptions.DBusException as dbus_exception: err_str = "DBus call to Tuned daemon failed" if self._debug: err_str += " (%s)" % str(dbus_exception) raise TunedAdminDBusException(err_str) def set_signal_handler(self, signal, cb): self._init_proxy() self._proxy.connect_to_signal(signal, cb) def is_running(self): return self._call("is_running") def start(self): return self._call("start") def stop(self): return self._call("stop") def profiles(self): return self._call("profiles") def profiles2(self): return self._call("profiles2") def profile_info(self, profile_name): return self._call("profile_info", profile_name) def log_capture_start(self, log_level, timeout): return self._call("log_capture_start", log_level, timeout) def log_capture_finish(self, token): return self._call("log_capture_finish", token) def active_profile(self): return self._call("active_profile") def profile_mode(self): return self._call("profile_mode") def switch_profile(self, new_profile): if new_profile == "": return (False, "No profile specified") return self._call("switch_profile", new_profile) def auto_profile(self): return self._call("auto_profile") def recommend_profile(self): return self._call("recommend_profile") def verify_profile(self): return self._call("verify_profile") def verify_profile_ignore_missing(self): return self._call("verify_profile_ignore_missing") def off(self): return self._call("disable") def exit(self, ret): self.set_action(None) self._ret = ret self._exit = True return ret tuned-2.10.0/tuned/admin/exceptions.py000066400000000000000000000001371331721725100176560ustar00rootroot00000000000000import tuned.exceptions class TunedAdminDBusException(tuned.exceptions.TunedException): pass tuned-2.10.0/tuned/consts.py000066400000000000000000000104121331721725100157130ustar00rootroot00000000000000import logging GLOBAL_CONFIG_FILE = "/etc/tuned/tuned-main.conf" ACTIVE_PROFILE_FILE = "/etc/tuned/active_profile" PROFILE_MODE_FILE = "/etc/tuned/profile_mode" PROFILE_FILE = "tuned.conf" RECOMMEND_CONF_FILE = "/etc/tuned/recommend.conf" DAEMONIZE_PARENT_TIMEOUT = 5 NAMESPACE = "com.redhat.tuned" DBUS_BUS = NAMESPACE DBUS_INTERFACE = "com.redhat.tuned.control" DBUS_OBJECT = "/Tuned" DEFAULT_PROFILE = "balanced" DEFAULT_STORAGE_FILE = "/run/tuned/save.pickle" LOAD_DIRECTORIES = ["/usr/lib/tuned", "/etc/tuned"] PERSISTENT_STORAGE_DIR = 
"/var/lib/tuned" PLUGIN_MAIN_UNIT_NAME = "main" RECOMMEND_DIRECTORIES = ["/usr/lib/tuned/recommend.d", "/etc/tuned/recommend.d"] TMP_FILE_SUFFIX = ".tmp" # max. number of consecutive errors to give up ERROR_THRESHOLD = 3 # bootloader plugin configuration BOOT_DIR = "/boot" GRUB2_CFG_FILES = ["/etc/grub2.cfg", "/etc/grub2-efi.cfg"] GRUB2_CFG_DIR = "/etc/grub.d" GRUB2_TUNED_TEMPLATE_NAME = "00_tuned" GRUB2_TUNED_TEMPLATE_PATH = GRUB2_CFG_DIR + "/" + GRUB2_TUNED_TEMPLATE_NAME GRUB2_TEMPLATE_HEADER_BEGIN = "### BEGIN /etc/grub.d/" + GRUB2_TUNED_TEMPLATE_NAME + " ###" GRUB2_TEMPLATE_HEADER_END = "### END /etc/grub.d/" + GRUB2_TUNED_TEMPLATE_NAME + " ###" GRUB2_TUNED_VAR = "tuned_params" GRUB2_TUNED_INITRD_VAR = "tuned_initrd" GRUB2_DEFAULT_ENV_FILE = "/etc/default/grub" INITRD_IMAGE_DIR = "/boot" BOOT_CMDLINE_TUNED_VAR = "TUNED_BOOT_CMDLINE" BOOT_CMDLINE_INITRD_ADD_VAR = "TUNED_BOOT_INITRD_ADD" BOOT_CMDLINE_FILE = "/etc/tuned/bootcmdline" PETITBOOT_DETECT_DIR = "/sys/firmware/opal" # modules plugin configuration MODULES_FILE = "/etc/modprobe.d/tuned.conf" # systemd plugin configuration SYSTEMD_SYSTEM_CONF_FILE = "/etc/systemd/system.conf" SYSTEMD_CPUAFFINITY_VAR = "CPUAffinity" # number of backups LOG_FILE_COUNT = 2 LOG_FILE_MAXBYTES = 100*1000 LOG_FILE = "/var/log/tuned/tuned.log" PID_FILE = "/run/tuned/tuned.pid" SYSTEM_RELEASE_FILE = "/etc/system-release-cpe" # prefix for functions plugins FUNCTION_PREFIX = "function_" # prefix for exported environment variables when calling scripts ENV_PREFIX = "TUNED_" # tuned-gui PREFIX_PROFILE_FACTORY = "Factory" PREFIX_PROFILE_USER = "User" CFG_DAEMON = "daemon" CFG_DYNAMIC_TUNING = "dynamic_tuning" CFG_SLEEP_INTERVAL = "sleep_interval" CFG_UPDATE_INTERVAL = "update_interval" CFG_RECOMMEND_COMMAND = "recommend_command" CFG_REAPPLY_SYSCTL = "reapply_sysctl" CFG_DEFAULT_INSTANCE_PRIORITY = "default_instance_priority" CFG_UDEV_BUFFER_SIZE = "udev_buffer_size" # no_daemon mode CFG_DEF_DAEMON = True # default configuration CFG_DEF_DYNAMIC_TUNING = True # how long to sleep before checking for events (in seconds) CFG_DEF_SLEEP_INTERVAL = 1 # update interval for dynamic tuning (in seconds) CFG_DEF_UPDATE_INTERVAL = 10 # recommend command availability CFG_DEF_RECOMMEND_COMMAND = True # reapply system sysctl CFG_DEF_REAPPLY_SYSCTL = True # default instance priority CFG_DEF_DEFAULT_INSTANCE_PRIORITY = 0 # default pyudev.Monitor buffer size CFG_DEF_UDEV_BUFFER_SIZE = 1024 * 1024 PATH_CPU_DMA_LATENCY = "/dev/cpu_dma_latency" # profile attributes which can be specified in the main section PROFILE_ATTR_SUMMARY = "summary" PROFILE_ATTR_DESCRIPTION = "description" DBUS_SIGNAL_PROFILE_CHANGED = "profile_changed" STR_HINT_REBOOT = "you need to reboot for changes to take effect" STR_VERIFY_PROFILE_DEVICE_VALUE_OK = "verify: passed: device %s: '%s' = '%s'" STR_VERIFY_PROFILE_VALUE_OK = "verify: passed: '%s' = '%s'" STR_VERIFY_PROFILE_OK = "verify: passed: '%s'" STR_VERIFY_PROFILE_DEVICE_VALUE_MISSING = "verify: skipped, missing: device %s: '%s'" STR_VERIFY_PROFILE_VALUE_MISSING = "verify: skipped, missing: '%s'" STR_VERIFY_PROFILE_DEVICE_VALUE_FAIL = "verify: failed: device %s: '%s' = '%s', expected '%s'" STR_VERIFY_PROFILE_VALUE_FAIL = "verify: failed: '%s' = '%s', expected '%s'" STR_VERIFY_PROFILE_FAIL = "verify: failed: '%s'" # timout for tuned-adm operations in seconds ADMIN_TIMEOUT = 600 # Strings for /etc/tuned/profile_mode specifying if the active profile # was set automatically or manually ACTIVE_PROFILE_AUTO = "auto" ACTIVE_PROFILE_MANUAL = "manual" 
LOG_LEVEL_CONSOLE = 60 LOG_LEVEL_CONSOLE_NAME = "CONSOLE" CAPTURE_LOG_LEVEL = "console" CAPTURE_LOG_LEVELS = { "debug": logging.DEBUG, "info": logging.INFO, "warn": logging.WARN, "error": logging.ERROR, "console": LOG_LEVEL_CONSOLE, "none": None, } tuned-2.10.0/tuned/daemon/000077500000000000000000000000001331721725100152755ustar00rootroot00000000000000tuned-2.10.0/tuned/daemon/__init__.py000066400000000000000000000001131331721725100174010ustar00rootroot00000000000000from .application import * from .controller import * from .daemon import * tuned-2.10.0/tuned/daemon/application.py000066400000000000000000000152731331721725100201620ustar00rootroot00000000000000from tuned import storage, units, monitors, plugins, profiles, exports, hardware from tuned.exceptions import TunedException import tuned.logs from . import controller from . import daemon import signal import os import sys import select import struct import tuned.consts as consts from tuned.utils.global_config import GlobalConfig log = tuned.logs.get() __all__ = ["Application"] class Application(object): def __init__(self, profile_name = None, config = None): self._dbus_exporter = None storage_provider = storage.PickleProvider() storage_factory = storage.Factory(storage_provider) self.config = GlobalConfig() if config is None else config if self.config.get_bool(consts.CFG_DYNAMIC_TUNING): log.info("dynamic tuning is enabled (can be overridden in plugins)") else: log.info("dynamic tuning is globally disabled") monitors_repository = monitors.Repository() udev_buffer_size = self.config.get_size("udev_buffer_size", consts.CFG_DEF_UDEV_BUFFER_SIZE) hardware_inventory = hardware.Inventory(buffer_size=udev_buffer_size) device_matcher = hardware.DeviceMatcher() device_matcher_udev = hardware.DeviceMatcherUdev() plugin_instance_factory = plugins.instance.Factory() self.variables = profiles.variables.Variables() plugins_repository = plugins.Repository(monitors_repository, storage_factory, hardware_inventory,\ device_matcher, device_matcher_udev, plugin_instance_factory, self.config, self.variables) def_instance_priority = int(self.config.get(consts.CFG_DEFAULT_INSTANCE_PRIORITY, consts.CFG_DEF_DEFAULT_INSTANCE_PRIORITY)) unit_manager = units.Manager(plugins_repository, monitors_repository, def_instance_priority) profile_factory = profiles.Factory() profile_merger = profiles.Merger() profile_locator = profiles.Locator(consts.LOAD_DIRECTORIES) profile_loader = profiles.Loader(profile_locator, profile_factory, profile_merger, self.config, self.variables) self._daemon = daemon.Daemon(unit_manager, profile_loader, profile_name, self.config, self) self._controller = controller.Controller(self._daemon, self.config) self._init_signals() self._pid_file = None def _handle_signal(self, signal_number, handler): def handler_wrapper(_signal_number, _frame): if signal_number == _signal_number: handler() signal.signal(signal_number, handler_wrapper) def _init_signals(self): self._handle_signal(signal.SIGHUP, self._controller.reload) self._handle_signal(signal.SIGINT, self._controller.terminate) self._handle_signal(signal.SIGTERM, self._controller.terminate) def attach_to_dbus(self, bus_name, object_name, interface_name): if self._dbus_exporter is not None: raise TunedException("DBus interface is already initialized.") self._dbus_exporter = exports.dbus.DBusExporter(bus_name, interface_name, object_name) exports.register_exporter(self._dbus_exporter) exports.register_object(self._controller) def _daemonize_parent(self, parent_in_fd, child_out_fd): """ Wait till 
the child signalizes that the initialization is complete by writing some uninteresting data into the pipe. """ os.close(child_out_fd) (read_ready, drop, drop) = select.select([parent_in_fd], [], [], consts.DAEMONIZE_PARENT_TIMEOUT) if len(read_ready) != 1: os.close(parent_in_fd) raise TunedException("Cannot daemonize, timeout when waiting for the child process.") response = os.read(parent_in_fd, 8) os.close(parent_in_fd) if len(response) == 0: raise TunedException("Cannot daemonize, no response from child process received.") try: val = struct.unpack("?", response)[0] except struct.error: raise TunedException("Cannot daemonize, invalid response from child process received.") if val != True: raise TunedException("Cannot daemonize, child process reports failure.") def write_pid_file(self, pid_file = consts.PID_FILE): self._pid_file = pid_file self._delete_pid_file() try: dir_name = os.path.dirname(self._pid_file) if not os.path.exists(dir_name): os.makedirs(dir_name) with os.fdopen(os.open(self._pid_file, os.O_CREAT|os.O_TRUNC|os.O_WRONLY , 0o644), "w") as f: f.write("%d" % os.getpid()) except (OSError,IOError) as error: log.critical("cannot write the PID to %s: %s" % (self._pid_file, str(error))) def _delete_pid_file(self): if os.path.exists(self._pid_file): try: os.unlink(self._pid_file) except OSError as error: log.warning("cannot remove existing PID file %s, %s" % (self._pid_file, str(error))) def _daemonize_child(self, pid_file, parent_in_fd, child_out_fd): """ Finishes daemonizing process, writes a PID file and signalizes to the parent that the initialization is complete. """ os.close(parent_in_fd) os.chdir("/") os.setsid() os.umask(0) try: pid = os.fork() if pid > 0: sys.exit(0) except OSError as error: log.critical("cannot daemonize, fork() error: %s" % str(error)) val = struct.pack("?", False) os.write(child_out_fd, val) os.close(child_out_fd) raise TunedException("Cannot daemonize, second fork() failed.") fd = open("/dev/null", "w+") os.dup2(fd.fileno(), sys.stdin.fileno()) os.dup2(fd.fileno(), sys.stdout.fileno()) os.dup2(fd.fileno(), sys.stderr.fileno()) self.write_pid_file(pid_file) log.debug("successfully daemonized") val = struct.pack("?", True) os.write(child_out_fd, val) os.close(child_out_fd) def daemonize(self, pid_file = consts.PID_FILE): """ Daemonizes the application. In case of failure, TunedException is raised in the parent process. If the operation is successfull, the main process is terminated and only child process returns from this method. 
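Hand-off detail (editor's summary of the surrounding code): after its second
fork the child writes the PID file and then sends a single struct-packed
boolean ("?") through the pipe; the parent select()s on its end of the pipe
for up to consts.DAEMONIZE_PARENT_TIMEOUT seconds and treats anything other
than a packed True value as a daemonization failure.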
""" parent_child_fds = os.pipe() try: child_pid = os.fork() except OSError as error: os.close(parent_child_fds[0]) os.close(parent_child_fds[1]) raise TunedException("Cannot daemonize, fork() failed.") try: if child_pid > 0: self._daemonize_parent(*parent_child_fds) sys.exit(0) else: self._daemonize_child(pid_file, *parent_child_fds) except: # pass exceptions only into parent process if child_pid > 0: raise else: sys.exit(1) @property def daemon(self): return self._daemon @property def controller(self): return self._controller def run(self, daemon): # override global config if ran from command line with daemon option (-d) if daemon: self.config.set(consts.CFG_DAEMON, True) if not self.config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON): log.warn("Using one shot no deamon mode, most of the functionality will be not available, it can be changed in global config") result = self._controller.run() if self.config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON): exports.stop() if self._pid_file is not None: self._delete_pid_file() return result tuned-2.10.0/tuned/daemon/controller.py000066400000000000000000000163311331721725100200360ustar00rootroot00000000000000from tuned import exports import tuned.logs import tuned.exceptions from tuned.exceptions import TunedException import threading import tuned.consts as consts from tuned.utils.commands import commands __all__ = ["Controller"] log = tuned.logs.get() class TimerStore(object): def __init__(self): self._timers = dict() self._timers_lock = threading.Lock() def store_timer(self, token, timer): with self._timers_lock: self._timers[token] = timer def drop_timer(self, token): with self._timers_lock: try: timer = self._timers[token] timer.cancel() del self._timers[token] except: pass def cancel_all(self): with self._timers_lock: for timer in self._timers.values(): timer.cancel() self._timers.clear() class Controller(tuned.exports.interfaces.ExportableInterface): """ Controller's purpose is to keep the program running, start/stop the tuning, and export the controller interface (currently only over D-Bus). """ def __init__(self, daemon, global_config): super(Controller, self).__init__() self._daemon = daemon self._global_config = global_config self._terminate = threading.Event() self._cmd = commands() self._timer_store = TimerStore() def run(self): """ Controller main loop. The call is blocking. """ log.info("starting controller") res = self.start() daemon = self._global_config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) if not res and daemon: exports.start() if daemon: self._terminate.clear() # we have to pass some timeout, otherwise signals will not work while not self._cmd.wait(self._terminate, 3600): pass log.info("terminating controller") self.stop() def terminate(self): self._terminate.set() @exports.signal("sbs") def profile_changed(self, profile_name, result, errstr): pass # exports decorator checks the authorization (currently through polkit), caller is None if # no authorization was performed (i.e. 
the call should process as authorized), string # identifying caller (with DBus it's the caller bus name) if authorized and empty # string if not authorized, caller must be the last argument def _log_capture_abort(self, token): tuned.logs.log_capture_finish(token) self._timer_store.drop_timer(token) @exports.export("ii", "s") def log_capture_start(self, log_level, timeout, caller = None): if caller == "": return "" token = tuned.logs.log_capture_start(log_level) if token is None: return "" if timeout > 0: timer = threading.Timer(timeout, self._log_capture_abort, args = [token]) self._timer_store.store_timer(token, timer) timer.start() return "" if token is None else token @exports.export("s", "s") def log_capture_finish(self, token, caller = None): if caller == "": return "" res = tuned.logs.log_capture_finish(token) self._timer_store.drop_timer(token) return "" if res is None else res @exports.export("", "b") def start(self, caller = None): if caller == "": return False if self._global_config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON): if self._daemon.is_running(): return True elif not self._daemon.is_enabled(): return False return self._daemon.start() @exports.export("", "b") def stop(self, caller = None): if caller == "": return False if not self._daemon.is_running(): res = True else: res = self._daemon.stop() self._timer_store.cancel_all() return res @exports.export("", "b") def reload(self, caller = None): if caller == "": return False if not self._daemon.is_running(): return False else: return self.stop() and self.start() def _switch_profile(self, profile_name, manual): was_running = self._daemon.is_running() msg = "OK" success = True reapply = False try: if was_running: self._daemon.stop(profile_switch = True) self._daemon.set_profile(profile_name, manual) except tuned.exceptions.TunedException as e: success = False msg = str(e) if was_running and self._daemon.profile.name == profile_name: log.error("Failed to reapply profile '%s'. Did it change on disk and break?" % profile_name) reapply = True else: log.error("Failed to apply profile '%s'" % profile_name) finally: if was_running: if reapply: log.warn("Applying previously applied (possibly out-dated) profile '%s'." % profile_name) elif not success: log.info("Applying previously applied profile.") self._daemon.start() return (success, msg) @exports.export("s", "(bs)") def switch_profile(self, profile_name, caller = None): if caller == "": return (False, "Unauthorized") return self._switch_profile(profile_name, True) @exports.export("", "(bs)") def auto_profile(self, caller = None): if caller == "": return (False, "Unauthorized") profile_name = self.recommend_profile() return self._switch_profile(profile_name, False) @exports.export("", "s") def active_profile(self, caller = None): if caller == "": return "" if self._daemon.profile is not None: return self._daemon.profile.name else: return "" @exports.export("", "(ss)") def profile_mode(self, caller = None): if caller == "": return "unknown", "Unauthorized" manual = self._daemon.manual if manual is None: # This means no profile is applied. Check the preset value. 
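# Editor's note (descriptive comment, not part of the upstream source): when
# no profile is applied the daemon's own 'manual' flag is None, so the mode is
# derived from the on-disk preset returned by commands.get_active_profile()
# (presumably the active_profile and profile_mode files under /etc/tuned); if
# the preset records no mode either, "manual" is assumed exactly when a
# profile name is preset, mirroring the fallback used elsewhere in this code.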
try: profile, manual = self._cmd.get_active_profile() if manual is None: manual = profile is not None except TunedException as e: mode = "unknown" error = str(e) return mode, error mode = consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO return mode, "" @exports.export("", "b") def disable(self, caller = None): if caller == "": return False if self._daemon.is_running(): self._daemon.stop() if self._daemon.is_enabled(): self._daemon.set_profile(None, None, save_instantly=True) return True @exports.export("", "b") def is_running(self, caller = None): if caller == "": return False return self._daemon.is_running() @exports.export("", "as") def profiles(self, caller = None): if caller == "": return [] return self._daemon.profile_loader.profile_locator.get_known_names() @exports.export("", "a(ss)") def profiles2(self, caller = None): if caller == "": return [] return self._daemon.profile_loader.profile_locator.get_known_names_summary() @exports.export("s", "(bsss)") def profile_info(self, profile_name, caller = None): if caller == "": return tuple(False, "", "", "") if profile_name is None or profile_name == "": profile_name = self.active_profile() return tuple(self._daemon.profile_loader.profile_locator.get_profile_attrs(profile_name, [consts.PROFILE_ATTR_SUMMARY, consts.PROFILE_ATTR_DESCRIPTION], [""])) @exports.export("", "s") def recommend_profile(self, caller = None): if caller == "": return "" return self._cmd.recommend_profile(hardcoded = not self._global_config.get_bool(consts.CFG_RECOMMEND_COMMAND, consts.CFG_DEF_RECOMMEND_COMMAND)) @exports.export("", "b") def verify_profile(self, caller = None): if caller == "": return False return self._daemon.verify_profile(ignore_missing = False) @exports.export("", "b") def verify_profile_ignore_missing(self, caller = None): if caller == "": return False return self._daemon.verify_profile(ignore_missing = True) tuned-2.10.0/tuned/daemon/daemon.py000066400000000000000000000220551331721725100171160ustar00rootroot00000000000000import os import errno import threading import tuned.logs from tuned.exceptions import TunedException from tuned.profiles.exceptions import InvalidProfileException import tuned.consts as consts from tuned.utils.commands import commands from tuned import exports import re log = tuned.logs.get() class Daemon(object): def __init__(self, unit_manager, profile_loader, profile_names=None, config=None, application=None): log.debug("initializing daemon") self._daemon = consts.CFG_DEF_DAEMON self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL) self._update_interval = int(consts.CFG_DEF_UPDATE_INTERVAL) self._dynamic_tuning = consts.CFG_DEF_DYNAMIC_TUNING self._recommend_command = True if config is not None: self._daemon = config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) self._sleep_interval = int(config.get(consts.CFG_SLEEP_INTERVAL, consts.CFG_DEF_SLEEP_INTERVAL)) self._update_interval = int(config.get(consts.CFG_UPDATE_INTERVAL, consts.CFG_DEF_UPDATE_INTERVAL)) self._dynamic_tuning = config.get_bool(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING) self._recommend_command = config.get_bool(consts.CFG_RECOMMEND_COMMAND, consts.CFG_DEF_RECOMMEND_COMMAND) self._application = application if self._sleep_interval <= 0: self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL) if self._update_interval == 0: self._dynamic_tuning = False elif self._update_interval < self._sleep_interval: self._update_interval = self._sleep_interval self._sleep_cycles = self._update_interval // self._sleep_interval 
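# Editor's note (descriptive comment, not part of the upstream source): the
# checks above normalize the configured intervals: a non-positive
# sleep_interval falls back to the default, update_interval == 0 disables
# dynamic tuning, and update_interval is never allowed to drop below
# sleep_interval. With the defaults (sleep 1 s, update 10 s) this yields
# self._sleep_cycles == 10, i.e. dynamic tuning work runs on every 10th wakeup.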
log.info("using sleep interval of %d second(s)" % self._sleep_interval) if self._dynamic_tuning: log.info("dynamic tuning is enabled (can be overridden by plugins)") log.info("using update interval of %d second(s) (%d times of the sleep interval)" % (self._sleep_cycles * self._sleep_interval, self._sleep_cycles)) self._unit_manager = unit_manager self._profile_loader = profile_loader self._init_threads() self._cmd = commands() try: self._init_profile(profile_names) except TunedException as e: log.error("Cannot set initial profile. No tunings will be enabled: %s" % e) def _init_threads(self): self._thread = None self._terminate = threading.Event() # Flag which is set if terminating due to profile_switch self._terminate_profile_switch = threading.Event() # Flag which is set if there is no operation in progress self._not_used = threading.Event() self._not_used.set() self._profile_applied = threading.Event() def _init_profile(self, profile_names): manual = True if profile_names is None: (profile_names, manual) = self._get_startup_profile() if profile_names is None: log.info("No profile is preset, running in manual mode. No profile will be enabled.") # Passed through '-p' cmdline option elif profile_names == "": log.info("No profile will be enabled.") self._profile = None self._manual = None self.set_profile(profile_names, manual) def set_profile(self, profile_names, manual, save_instantly=False): if self.is_running(): raise TunedException(self._notify_profile_changed(profile_names, False, "Cannot set profile while the daemon is running.")) if profile_names == "" or profile_names is None: self._profile = None self._manual = manual else: profile_list = profile_names.split() for profile in profile_list: if profile not in self.profile_loader.profile_locator.get_known_names(): raise TunedException(self._notify_profile_changed(\ profile_names, False,\ "Requested profile '%s' doesn't exist." 
% profile)) try: self._profile = self._profile_loader.load(profile_names) self._manual = manual except InvalidProfileException as e: raise TunedException(self._notify_profile_changed(profile_names, False, "Cannot load profile(s) '%s': %s" % (profile_names, e))) if save_instantly: if profile_names is None: profile_names = "" self._save_active_profile(profile_names, manual) @property def profile(self): return self._profile @property def manual(self): return self._manual @property def profile_loader(self): return self._profile_loader # send notification when profile is changed (everything is setup) or if error occured # result: True - OK, False - error occured def _notify_profile_changed(self, profile_names, result, errstr): if self._application is not None and self._application._dbus_exporter is not None: self._application._dbus_exporter.send_signal(consts.DBUS_SIGNAL_PROFILE_CHANGED, profile_names, result, errstr) return errstr def _full_rollback_required(self): retcode, out = self._cmd.execute(["systemctl", "is-system-running"], no_errors = [0]) if retcode < 0: return False if out[:8] == "stopping": return False retcode, out = self._cmd.execute(["systemctl", "list-jobs"], no_errors = [0]) return re.search(r"\b(shutdown|reboot|halt|poweroff)\.target.*start", out) is None def _thread_code(self): if self._profile is None: raise TunedException("Cannot start the daemon without setting a profile.") self._unit_manager.create(self._profile.units) self._save_active_profile(self._profile.name, self._manual) self._unit_manager.start_tuning() self._profile_applied.set() log.info("static tuning from profile '%s' applied" % self._profile.name) if self._daemon: exports.start() self._notify_profile_changed(self._profile.name, True, "OK") if self._daemon: # In python 2 interpreter with applied patch for rhbz#917709 we need to periodically # poll, otherwise the python will not have chance to update events / locks (due to GIL) # and e.g. DBus control will not work. The polling interval of 1 seconds (which is # the default) is still much better than 50 ms polling with unpatched interpreter. # For more details see tuned rhbz#917587. 
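# Editor's note (descriptive comment, not part of the upstream source):
# _sleep_cnt below counts down by one on every wakeup of the polling loop;
# when it reaches zero (roughly every update_interval seconds) the monitors
# are refreshed and dynamic tunings re-applied, and the counter is reset to
# self._sleep_cycles.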
_sleep_cnt = self._sleep_cycles while not self._cmd.wait(self._terminate, self._sleep_interval): if self._dynamic_tuning: _sleep_cnt -= 1 if _sleep_cnt <= 0: _sleep_cnt = self._sleep_cycles log.debug("updating monitors") self._unit_manager.update_monitors() log.debug("performing tunings") self._unit_manager.update_tuning() self._profile_applied.clear() # wait for others to complete their tasks, use timeout 3 x sleep_interval to prevent # deadlocks i = 0 while not self._cmd.wait(self._not_used, self._sleep_interval) and i < 3: i += 1 # if terminating due to profile switch if self._terminate_profile_switch.is_set(): full_rollback = True else: # with systemd it detects system shutdown and in such case it doesn't perform # full cleanup, if not shutting down it means that Tuned was explicitly # stopped by user and in such case do full cleanup, without systemd never # do full cleanup full_rollback = False if self._full_rollback_required(): log.info("terminating Tuned, rolling back all changes") full_rollback = True else: log.info("terminating Tuned due to system shutdown / reboot") if self._daemon: self._unit_manager.stop_tuning(full_rollback) self._unit_manager.destroy_all() def _save_active_profile(self, profile_names, manual): try: self._cmd.save_active_profile(profile_names, manual) except TunedException as e: log.error(str(e)) def _get_recommended_profile(self): log.info("Running in automatic mode, checking what profile is recommended for your configuration.") profile = self._cmd.recommend_profile(hardcoded = not self._recommend_command) log.info("Using '%s' profile" % profile) return profile def _get_startup_profile(self): profile, manual = self._cmd.get_active_profile() if manual is None: manual = profile is not None if not manual: profile = self._get_recommended_profile() return profile, manual def is_enabled(self): return self._profile is not None def is_running(self): return self._thread is not None and self._thread.is_alive() def start(self): if self.is_running(): return False if self._profile is None: return False log.info("starting tuning") self._not_used.set() self._thread = threading.Thread(target=self._thread_code) self._terminate_profile_switch.clear() self._terminate.clear() self._thread.start() return True def verify_profile(self, ignore_missing): if not self.is_running(): log.error("tuned is not running") return False if self._profile is None: log.error("no profile is set") return False if not self._profile_applied.is_set(): log.error("profile is not applied") return False # using deamon, the main loop mustn't exit before our completion self._not_used.clear() log.info("verifying profile(s): %s" % self._profile.name) ret = self._unit_manager.verify_tuning(ignore_missing) # main loop is allowed to exit self._not_used.set() return ret # profile_switch is helper telling plugins whether the stop is due to profile switch def stop(self, profile_switch = False): if not self.is_running(): return False log.info("stopping tuning") if profile_switch: self._terminate_profile_switch.set() self._terminate.set() self._thread.join() self._thread = None return True tuned-2.10.0/tuned/exceptions.py000066400000000000000000000010701331721725100165630ustar00rootroot00000000000000import tuned.logs import sys import traceback exception_logger = tuned.logs.get() class TunedException(Exception): """ """ def log(self, logger = None): if logger is None: logger = exception_logger logger.error(str(self)) self._log_trace(logger) def _log_trace(self, logger): (exc_type, exc_value, exc_traceback) = 
sys.exc_info() if exc_value != self: logger.debug("stack trace is no longer available") else: exception_info = "".join(traceback.format_exception(exc_type, exc_value, exc_traceback)).rstrip() logger.debug(exception_info) tuned-2.10.0/tuned/exports/000077500000000000000000000000001331721725100155365ustar00rootroot00000000000000tuned-2.10.0/tuned/exports/__init__.py000066400000000000000000000017661331721725100176610ustar00rootroot00000000000000from . import interfaces from . import controller from . import dbus_exporter as dbus def export(*args, **kwargs): """Decorator, use to mark exportable methods.""" def wrapper(method): method.export_params = [ args, kwargs ] return method return wrapper def signal(*args, **kwargs): """Decorator, use to mark exportable signals.""" def wrapper(method): method.signal_params = [ args, kwargs ] return method return wrapper def register_exporter(instance): if not isinstance(instance, interfaces.ExporterInterface): raise Exception() ctl = controller.ExportsController.get_instance() return ctl.register_exporter(instance) def register_object(instance): if not isinstance(instance, interfaces.ExportableInterface): raise Exception() ctl = controller.ExportsController.get_instance() return ctl.register_object(instance) def start(): ctl = controller.ExportsController.get_instance() return ctl.start() def stop(): ctl = controller.ExportsController.get_instance() return ctl.stop() tuned-2.10.0/tuned/exports/controller.py000066400000000000000000000036421331721725100203000ustar00rootroot00000000000000from . import interfaces import inspect import tuned.patterns class ExportsController(tuned.patterns.Singleton): """ Controls and manages object interface exporting. """ def __init__(self): super(ExportsController, self).__init__() self._exporters = [] self._objects = [] self._exports_initialized = False def register_exporter(self, instance): """Register objects exporter.""" self._exporters.append(instance) def register_object(self, instance): """Register object to be exported.""" self._objects.append(instance) def _is_exportable_method(self, method): """Check if method was marked with @exports.export wrapper.""" return inspect.ismethod(method) and hasattr(method, "export_params") def _is_exportable_signal(self, method): """Check if method was marked with @exports.signal wrapper.""" return inspect.ismethod(method) and hasattr(method, "signal_params") def _export_method(self, method): """Register method to all exporters.""" for exporter in self._exporters: args = method.export_params[0] kwargs = method.export_params[1] exporter.export(method, *args, **kwargs) def _export_signal(self, method): """Register signal to all exporters.""" for exporter in self._exporters: args = method.signal_params[0] kwargs = method.signal_params[1] exporter.signal(method, *args, **kwargs) def _initialize_exports(self): if self._exports_initialized: return for instance in self._objects: for name, method in inspect.getmembers(instance, self._is_exportable_method): self._export_method(method) for name, method in inspect.getmembers(instance, self._is_exportable_signal): self._export_signal(method) self._exports_initialized = True def start(self): """Start the exports.""" self._initialize_exports() for exporter in self._exporters: exporter.start() def stop(self): """Stop the exports.""" for exporter in self._exporters: exporter.stop() tuned-2.10.0/tuned/exports/dbus_exporter.py000066400000000000000000000116611331721725100210020ustar00rootroot00000000000000from . 
import interfaces import decorator import dbus.service import dbus.mainloop.glib import dbus.exceptions import inspect import threading import signal import tuned.logs import tuned.consts as consts from tuned.utils.polkit import polkit from gi.repository import GLib log = tuned.logs.get() class DBusExporter(interfaces.ExporterInterface): """ Export method calls through DBus Interface. We take a method to be exported and create a simple wrapper function to call it. This is required as we need the original function to be bound to the original object instance. While the wrapper will be bound to an object we dynamically construct. """ def __init__(self, bus_name, interface_name, object_name): dbus.mainloop.glib.DBusGMainLoop(set_as_default=True) self._dbus_object_cls = None self._dbus_object = None self._dbus_methods = {} self._signals = set() self._bus_name = bus_name self._interface_name = interface_name self._object_name = object_name self._thread = None self._bus_object = None self._polkit = polkit() # dirty hack that fixes KeyboardInterrupt handling # the hack is needed because PyGObject / GTK+-3 developers are morons signal_handler = signal.getsignal(signal.SIGINT) self._main_loop = GLib.MainLoop() signal.signal(signal.SIGINT, signal_handler) @property def bus_name(self): return self._bus_name @property def interface_name(self): return self._interface_name @property def object_name(self): return self._object_name def running(self): return self._thread is not None def export(self, method, in_signature, out_signature): if not inspect.ismethod(method): raise Exception("Only bound methods can be exported.") method_name = method.__name__ if method_name in self._dbus_methods: raise Exception("Method with this name is already exported.") def wrapper(wrapped, owner, *args, **kwargs): action_id = consts.NAMESPACE + "." 
+ method.__name__ caller = args[-1] log.debug("checking authorization for for action '%s' requested by caller '%s'" % (action_id, caller)) ret = self._polkit.check_authorization(caller, action_id) if ret == 1: log.debug("action '%s' requested by caller '%s' was successfully authorized by polkit" % (action_id, caller)) elif ret == 2: log.warn("polkit error, but action '%s' requested by caller '%s' was successfully authorized by fallback method" % (action_id, caller)) elif ret == 0: log.info("action '%s' requested by caller '%s' wasn't authorized, ignoring the request" % (action_id, caller)) args[-1] = "" elif ret == -1: log.warn("polkit error and action '%s' requested by caller '%s' wasn't authorized by fallback method, ignoring the request" % (action_id, caller)) args[-1] = "" else: log.error("polkit error and unable to use fallback method to authorize action '%s' requested by caller '%s', ignoring the request" % (action_id, caller)) args[-1] = "" return method(*args, **kwargs) wrapper = decorator.decorator(wrapper, method.__func__) wrapper = dbus.service.method(self._interface_name, in_signature, out_signature, sender_keyword = "caller")(wrapper) self._dbus_methods[method_name] = wrapper def signal(self, method, out_signature): if not inspect.ismethod(method): raise Exception("Only bound methods can be exported.") method_name = method.__name__ if method_name in self._dbus_methods: raise Exception("Method with this name is already exported.") def wrapper(wrapped, owner, *args, **kwargs): return method(*args, **kwargs) wrapper = decorator.decorator(wrapper, method.__func__) wrapper = dbus.service.signal(self._interface_name, out_signature)(wrapper) self._dbus_methods[method_name] = wrapper self._signals.add(method_name) def send_signal(self, signal, *args, **kwargs): err = False if not signal in self._signals or self._bus_object is None: err = True try: method = getattr(self._bus_object, signal) except AttributeError: err = True if err: raise Exception("Signal '%s' doesn't exist." 
% signal) else: method(*args, **kwargs) def _construct_dbus_object_class(self): if self._dbus_object_cls is not None: raise Exception("The exporter class was already build.") unique_name = "DBusExporter_%d" % id(self) cls = type(unique_name, (dbus.service.Object,), self._dbus_methods) self._dbus_object_cls = cls def start(self): if self.running(): return if self._dbus_object_cls is None: self._construct_dbus_object_class() self.stop() bus = dbus.SystemBus() bus_name = dbus.service.BusName(self._bus_name, bus) self._bus_object = self._dbus_object_cls(bus, self._object_name, bus_name) self._thread = threading.Thread(target=self._thread_code) self._thread.start() def stop(self): if self._thread is not None and self._thread.is_alive(): self._main_loop.quit() self._thread.join() self._thread = None def _thread_code(self): self._main_loop.run() del self._bus_object self._bus_object = None tuned-2.10.0/tuned/exports/interfaces.py000066400000000000000000000010531331721725100202320ustar00rootroot00000000000000class ExportableInterface(object): pass class ExporterInterface(object): def export(self, method, in_signature, out_signature): # to be overridden by concrete implementation raise NotImplementedError() def signal(self, method, out_signature): # to be overridden by concrete implementation raise NotImplementedError() def send_signal(self, signal, *args, **kwargs): # to be overridden by concrete implementation raise NotImplementedError() def start(self): raise NotImplementedError() def stop(self): raise NotImplementedError() tuned-2.10.0/tuned/gtk/000077500000000000000000000000001331721725100146175ustar00rootroot00000000000000tuned-2.10.0/tuned/gtk/__init__.py000066400000000000000000000000001331721725100167160ustar00rootroot00000000000000tuned-2.10.0/tuned/gtk/gui_plugin_loader.py000066400000000000000000000116211331721725100206620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2008-2014 Red Hat, Inc. # Authors: Marek Staňa, Jaroslav Škarvada # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # ''' Created on Mar 30, 2014 @author: mstana ''' import os from validate import Validator import tuned.plugins.base import tuned.consts as consts import tuned.logs import tuned.plugins.repository as repository import configobj as ConfigObj from tuned.exceptions import TunedException from tuned import plugins from tuned.utils.plugin_loader import PluginLoader from tuned import storage, units, monitors, plugins, profiles, exports, \ hardware import tuned.plugins as Plugins __all__ = ['GuiPluginLoader'] global_config_spec = ['dynamic_tuning = boolean(default=%s)' % consts.CFG_DEF_DYNAMIC_TUNING, 'sleep_interval = integer(default=%s)' % consts.CFG_DEF_SLEEP_INTERVAL, 'update_interval = integer(default=%s)' % consts.CFG_DEF_UPDATE_INTERVAL] class GuiPluginLoader(PluginLoader): ''' Class for scan, import and load actual avaible plugins. 
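    Editor's summary (derived from the code below, not upstream wording): the
    loader scans the tuned.plugins package directory for modules named
    plugin_*.py, asks the plugin repository to instantiate each of them, and
    silently skips plugins that fail to import or that report themselves as
    not supported on the current system.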
''' def __init__(self): ''' Constructor ''' self._plugins = set() self.plugins_doc = {} storage_provider = storage.PickleProvider() storage_factory = storage.Factory(storage_provider) monitors_repository = monitors.Repository() hardware_inventory = hardware.Inventory() device_matcher = hardware.DeviceMatcher() device_matcher_udev = hardware.DeviceMatcherUdev() plugin_instance_factory = plugins.instance.Factory() self.repo = repository.Repository( monitors_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, plugin_instance_factory, None, None ) self._set_loader_parameters(), self.create_all(self._import_plugin_names()) @property def plugins(self): return self.repo.plugins def _set_loader_parameters(self): ''' Sets private atributes. ''' self._namespace = 'tuned.plugins' self._prefix = 'plugin_' self._sufix = '.py' self._interface = tuned.plugins.base.Plugin def _import_plugin_names(self): ''' Scan directories and find names to load ''' names = [] for name in os.listdir(Plugins.__path__[0]): file = name.split(self._prefix).pop() if file.endswith(self._sufix): (file_name, file_extension) = os.path.splitext(file) names.append(file_name) return names def create_all(self, names): for plugin_name in names: try: self._plugins.add(self.repo.create(plugin_name)) except ImportError: pass except tuned.plugins.exceptions.NotSupportedPluginException: # problem with importing of plugin # print(str(ImportError) + plugin_name) pass # print(plugin_name + " is not supported!") def get_plugin(self, plugin_name): for plugin in self.plugins: if plugin_name == plugin.name: return plugin def _load_global_config(self, file_name=consts.GLOBAL_CONFIG_FILE): """ Loads global configuration file. """ try: config = ConfigObj.ConfigObj(file_name, configspec=global_config_spec, raise_errors = True, file_error = True, list_values = False, interpolation = False) except IOError as e: raise TunedException("Global tuned configuration file '%s' not found." % file_name) except ConfigObj.ConfigObjError as e: raise TunedException("Error parsing global tuned configuration file '%s'." % file_name) vdt = Validator() if not config.validate(vdt, copy=True): raise TunedException("Global tuned configuration file '%s' is not valid." % file_name) return config tuned-2.10.0/tuned/gtk/gui_profile_loader.py000066400000000000000000000147761331721725100210420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2008-2014 Red Hat, Inc. # Authors: Marek Staňa, Jaroslav Škarvada # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # ''' Created on Mar 13, 2014 @author: mstana ''' import configobj import os import tuned.profiles.profile as p import tuned.consts import shutil import tuned.gtk.managerException as managerException class GuiProfileLoader(object): """ Profiles loader for GUI Gtk purposes. 
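    A minimal usage sketch (illustrative only; the directory list is an
    example, standard tuned profile directories are typically
    /usr/lib/tuned and /etc/tuned):

        loader = GuiProfileLoader(['/usr/lib/tuned', '/etc/tuned'])
        for name in loader.get_names():
            profile = loader.get_profile(name)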
""" profiles = {} def __init__(self, directories): self.directories = directories self._load_all_profiles() def get_raw_profile(self, profile_name): file = self._locate_profile_path(profile_name) + '/' \ + profile_name + '/' + tuned.consts.PROFILE_FILE with open(file, 'r') as f: return f.read() def set_raw_profile(self, profile_name, config): profilePath = self._locate_profile_path(profile_name) if profilePath == tuned.consts.LOAD_DIRECTORIES[1]: file = profilePath + '/' + profile_name + '/' + tuned.consts.PROFILE_FILE with open(file, 'w') as f: f.write(config) else: raise managerException.ManagerException(profile_name + ' profile is stored in ' + profilePath + ' and can not be storet do this location') def load_profile_config(self, profile_name, path): conf_path = path + '/' + profile_name + '/' + tuned.consts.PROFILE_FILE profile_config = configobj.ConfigObj(conf_path, list_values = False, interpolation = False) return profile_config def _locate_profile_path(self, profile_name): for d in self.directories: for profile in os.listdir(d): if os.path.isdir(d + '/' + profile) and profile \ == profile_name: path = d return path def _load_all_profiles(self): for d in self.directories: for profile in os.listdir(d): if os.path.isdir(d + '/' + profile): try: self.profiles[profile] = p.Profile(profile, self.load_profile_config(profile, d)) except configobj.ParseError: pass # print "can not make \""+ profile +"\" profile without correct config on path: " + d # except: # raise managerException.ManagerException("Can not make profile") # print "can not make \""+ profile +"\" profile without correct config with path: " + d def _refresh_profiles(self): self.profiles = {} self._load_all_profiles() def save_profile(self, profile): path = tuned.consts.LOAD_DIRECTORIES[1] + '/' + profile.name config = configobj.ConfigObj(list_values = False, interpolation = False) config.filename = path + '/' + tuned.consts.PROFILE_FILE config.initial_comment = ('#', 'tuned configuration', '#') try: config['main'] = profile.options except KeyError: config['main'] = '' # profile dont have main section pass for (name, unit) in list(profile.units.items()): config[name] = unit.options if not os.path.exists(path): os.makedirs(path) else: # you cant rewrite profile! 
raise managerException.ManagerException('Profile with name ' + profile.name + ' already exists') config.write() self._refresh_profiles() def update_profile( self, old_profile_name, profile, is_admin, ): if old_profile_name not in self.get_names(): raise managerException.ManagerException('Profile: ' + old_profile_name + ' is not in profiles') if self.is_profile_factory(old_profile_name): path = tuned.consts.LOAD_DIRECTORIES[0] + '/' + profile.name else: path = tuned.consts.LOAD_DIRECTORIES[1] + '/' + profile.name if old_profile_name != profile.name: self.remove_profile(old_profile_name, is_admin=is_admin) config = configobj.ConfigObj(list_values = False, interpolation = False) config.filename = path + '/' + tuned.consts.PROFILE_FILE config.initial_comment = ('#', 'tuned configuration', '#') try: config['main'] = profile.options except KeyError: # profile doesn't have a main section pass for (name, unit) in list(profile.units.items()): config[name] = unit.options if not os.path.exists(path): os.makedirs(path) config.write() self._refresh_profiles() def get_names(self): return list(self.profiles.keys()) def get_profile(self, profile): return self.profiles[profile] def add_profile(self, profile): self.profiles[profile.name] = profile self.save_profile(profile) def remove_profile(self, profile_name, is_admin): profile_path = self._locate_profile_path(profile_name) if self.is_profile_removable(profile_name) or is_admin: shutil.rmtree(profile_path + '/' + profile_name) self._load_all_profiles() else: raise managerException.ManagerException(profile_name + ' profile is stored in ' + profile_path) def is_profile_removable(self, profile_name): # profile is in /etc/tuned profile_path = self._locate_profile_path(profile_name) if profile_path == tuned.consts.LOAD_DIRECTORIES[1]: return True else: return False def is_profile_factory(self, profile_name): # profile is in /usr/lib/tuned return not self.is_profile_removable(profile_name) tuned-2.10.0/tuned/gtk/managerException.py000066400000000000000000000022711331721725100204640ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2008-2014 Red Hat, Inc. # Authors: Marek Staňa, Jaroslav Škarvada # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# ''' Created on Apr 6, 2014 @author: mstana ''' class ManagerException(Exception): """ """ def __init__(self, code): self.code = code def __str__(self): return repr(self.code) def profile_already_exists(self, text=None): if text is None: return repr(self.code) else: return repr(text) tuned-2.10.0/tuned/hardware/000077500000000000000000000000001331721725100156275ustar00rootroot00000000000000tuned-2.10.0/tuned/hardware/__init__.py000066400000000000000000000001321331721725100177340ustar00rootroot00000000000000from .inventory import * from .device_matcher import * from .device_matcher_udev import * tuned-2.10.0/tuned/hardware/device_matcher.py000066400000000000000000000030501331721725100211410ustar00rootroot00000000000000import fnmatch import re __all__ = ["DeviceMatcher"] class DeviceMatcher(object): """ Device name matching against the devices specification in tuning profiles. The devices specification consists of multiple rules separated by spaces. The rules have a syntax of shell-style wildcards and are either positive or negative. The negative rules are prefixed with an exclamation mark. """ def match(self, rules, device_name): """ Match a device against the specification in the profile. If there is no positive rule in the specification, implicit rule which matches all devices is added. The device matches if and only if it matches some positive rule, but no negative rule. """ if isinstance(rules, str): rules = re.split(r"\s|,\s*", rules) positive_rules = [rule for rule in rules if not rule.startswith("!") and not rule.strip() == ''] negative_rules = [rule[1:] for rule in rules if rule not in positive_rules] if len(positive_rules) == 0: positive_rules.append("*") matches = False for rule in positive_rules: if fnmatch.fnmatch(device_name, rule): matches = True break for rule in negative_rules: if fnmatch.fnmatch(device_name, rule): matches = False break return matches def match_list(self, rules, device_list): """ Match a device list against the specification in the profile. Returns the list, which is a subset of devices which match. """ matching_devices = [] for device in device_list: if self.match(rules, device): matching_devices.append(device) return matching_devices tuned-2.10.0/tuned/hardware/device_matcher_udev.py000066400000000000000000000006611331721725100221710ustar00rootroot00000000000000from . import device_matcher import re __all__ = ["DeviceMatcherUdev"] class DeviceMatcherUdev(device_matcher.DeviceMatcher): def match(self, regex, device): """ Match a device against the udev regex in tuning profiles. device is a pyudev.Device object """ properties = '' for key, val in list(device.items()): properties += key + '=' + val + '\n' return re.search(regex, properties, re.MULTILINE) is not None tuned-2.10.0/tuned/hardware/inventory.py000066400000000000000000000071011331721725100202350ustar00rootroot00000000000000import pyudev import tuned.logs from tuned import consts __all__ = ["Inventory"] log = tuned.logs.get() class Inventory(object): """ Inventory object can handle information about available hardware devices. It also informs the plugins about related hardware events. 
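    A minimal sketch of the subscription API (illustrative only; the plugin
    name, subsystem and callback below are examples):

        inventory = Inventory()

        def handle_event(event, device):
            log.info('%s: %s' % (event, device.sys_name))

        inventory.subscribe('example_plugin', 'block', handle_event)
        # ... later, when the plugin is being cleaned up ...
        inventory.unsubscribe('example_plugin')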
""" def __init__(self, udev_context=None, udev_monitor_cls=None, monitor_observer_factory=None, buffer_size=None): if udev_context is not None: self._udev_context = udev_context else: self._udev_context = pyudev.Context() if udev_monitor_cls is None: udev_monitor_cls = pyudev.Monitor self._udev_monitor = udev_monitor_cls.from_netlink(self._udev_context) if buffer_size is None: buffer_size = consts.CFG_DEF_UDEV_BUFFER_SIZE self._udev_monitor.set_receive_buffer_size(buffer_size) if monitor_observer_factory is None: monitor_observer_factory = _MonitorObserverFactory() self._monitor_observer_factory = monitor_observer_factory self._monitor_observer = None self._subscriptions = {} def get_device(self, subsystem, sys_name): """Get a pyudev.Device object for the sys_name (e.g. 'sda').""" try: return pyudev.Devices.from_name(self._udev_context, subsystem, sys_name) # workaround for pyudev < 0.18 except AttributeError: return pyudev.Device.from_name(self._udev_context, subsystem, sys_name) def get_devices(self, subsystem): """Get list of devices on a given subsystem.""" return self._udev_context.list_devices(subsystem=subsystem) def _remove_unused_filters(self): self._udev_monitor.remove_filter() for subsystem in self._subscriptions: self._udev_monitor.filter_by(subsystem) def _handle_udev_event(self, event, device): if not device.subsystem in self._subscriptions: return for (plugin, callback) in self._subscriptions[device.subsystem]: try: callback(event, device) except Exception as e: log.error("Exception occured in event handler of '%s'." % plugin) log.exception(e) def subscribe(self, plugin, subsystem, callback): """Register handler of device events on a given subsystem.""" log.debug("adding handler: %s (%s)" % (subsystem, plugin)) callback_data = (plugin, callback) if subsystem in self._subscriptions: self._subscriptions[subsystem].append(callback_data) else: self._subscriptions[subsystem] = [callback_data, ] self._udev_monitor.filter_by(subsystem) if self._monitor_observer is None: log.debug("starting monitor observer") self._monitor_observer = self._monitor_observer_factory.create(self._udev_monitor, self._handle_udev_event) self._monitor_observer.start() def _unsubscribe_subsystem(self, plugin, subsystem): for callback_data in self._subscriptions[subsystem]: (_plugin, callback) = callback_data if plugin == _plugin: log.debug("removing handler: %s (%s)" % (subsystem, plugin)) self._subscriptions[subsystem].remove(callback_data) def unsubscribe(self, plugin, subsystem=None): """Unregister handler registered with subscribe method.""" empty_subsystems = [] for _subsystem in self._subscriptions: if subsystem is None or _subsystem == subsystem: self._unsubscribe_subsystem(plugin, _subsystem) if len(self._subscriptions[_subsystem]) == 0: empty_subsystems.append(_subsystem) for _subsystem in empty_subsystems: del self._subscriptions[_subsystem] if len(self._subscriptions) == 0 and self._monitor_observer is not None: log.debug("stopping monitor observer") self._monitor_observer.stop() self._monitor_observer = None class _MonitorObserverFactory(object): def create(self, *args, **kwargs): return pyudev.MonitorObserver(*args, **kwargs) tuned-2.10.0/tuned/logs.py000066400000000000000000000072131331721725100153530ustar00rootroot00000000000000import atexit import logging import logging.handlers import os import os.path import inspect import tuned.consts as consts import random import string import threading try: from StringIO import StringIO except: from io import StringIO __all__ = ["get"] root_logger = 
None log_handlers = {} log_handlers_lock = threading.Lock() class LogHandler(object): def __init__(self, handler, stream): self.handler = handler self.stream = stream def _random_string(length): r = random.SystemRandom() chars = string.ascii_letters + string.digits res = "" for i in range(length): res += random.choice(chars) return res def log_capture_start(log_level): with log_handlers_lock: for i in range(10): token = _random_string(16) if token not in log_handlers: break else: return None stream = StringIO() handler = logging.StreamHandler(stream) handler.setLevel(log_level) formatter = logging.Formatter( "%(levelname)-8s %(name)s: %(message)s") handler.setFormatter(formatter) root_logger.addHandler(handler) log_handler = LogHandler(handler, stream) log_handlers[token] = log_handler root_logger.debug("Added log handler %s." % token) return token def log_capture_finish(token): with log_handlers_lock: try: log_handler = log_handlers[token] except KeyError: return None content = log_handler.stream.getvalue() log_handler.stream.close() root_logger.removeHandler(log_handler.handler) del log_handlers[token] root_logger.debug("Removed log handler %s." % token) return content def get(): global root_logger if root_logger is None: root_logger = logging.getLogger("tuned") calling_module = inspect.currentframe().f_back name = calling_module.f_locals["__name__"] if name == "__main__": name = "tuned" return root_logger elif name.startswith("tuned."): (root, child) = name.split(".", 1) child_logger = root_logger.getChild(child) child_logger.remove_all_handlers() child_logger.setLevel("NOTSET") return child_logger else: assert False class TunedLogger(logging.getLoggerClass()): """Custom tuned daemon logger class.""" _formatter = logging.Formatter("%(asctime)s %(levelname)-8s %(name)s: %(message)s") _console_handler = None _file_handler = None def __init__(self, *args, **kwargs): super(TunedLogger, self).__init__(*args, **kwargs) self.setLevel(logging.INFO) self.switch_to_console() def console(self, msg, *args, **kwargs): self.log(consts.LOG_LEVEL_CONSOLE, msg, *args, **kwargs) def switch_to_console(self): self._setup_console_handler() self.remove_all_handlers() self.addHandler(self._console_handler) def switch_to_file(self, filename = consts.LOG_FILE): self._setup_file_handler(filename) self.remove_all_handlers() self.addHandler(self._file_handler) def remove_all_handlers(self): _handlers = self.handlers for handler in _handlers: self.removeHandler(handler) @classmethod def _setup_console_handler(cls): if cls._console_handler is not None: return cls._console_handler = logging.StreamHandler() cls._console_handler.setFormatter(cls._formatter) @classmethod def _setup_file_handler(cls, filename): if cls._file_handler is not None: return log_directory = os.path.dirname(filename) if log_directory == '': log_directory = '.' 
if not os.path.exists(log_directory): os.makedirs(log_directory) cls._file_handler = logging.handlers.RotatingFileHandler( filename, maxBytes = consts.LOG_FILE_MAXBYTES, backupCount = consts.LOG_FILE_COUNT) cls._file_handler.setFormatter(cls._formatter) logging.addLevelName(consts.LOG_LEVEL_CONSOLE, consts.LOG_LEVEL_CONSOLE_NAME) logging.setLoggerClass(TunedLogger) atexit.register(logging.shutdown) tuned-2.10.0/tuned/monitors/000077500000000000000000000000001331721725100157045ustar00rootroot00000000000000tuned-2.10.0/tuned/monitors/__init__.py000066400000000000000000000000561331721725100200160ustar00rootroot00000000000000from .base import * from .repository import * tuned-2.10.0/tuned/monitors/base.py000066400000000000000000000051351331721725100171740ustar00rootroot00000000000000import tuned.logs log = tuned.logs.get() __all__ = ["Monitor"] class Monitor(object): """ Base class for all monitors. Monitors provide data about the running system to Plugin objects, which use the data to tune system parameters. Following methods require reimplementation: - _init_available_devices(cls) - update(cls) """ # class properties @classmethod def _init_class(cls): cls._class_initialized = False cls._instances = set() cls._available_devices = set() cls._updating_devices = set() cls._load = {} cls._init_available_devices() assert isinstance(cls._available_devices, set) cls._class_initialized = True log.debug("available devices: %s" % ", ".join(cls._available_devices)) @classmethod def _init_available_devices(cls): raise NotImplementedError() @classmethod def get_available_devices(cls): return cls._available_devices @classmethod def update(cls): raise NotImplementedError() @classmethod def _register_instance(cls, instance): cls._instances.add(instance) @classmethod def _deregister_instance(cls, instance): cls._instances.remove(instance) @classmethod def _refresh_updating_devices(cls): new_updating = set() for instance in cls._instances: new_updating |= instance.devices cls._updating_devices.clear() cls._updating_devices.update(new_updating) @classmethod def instances(cls): return cls._instances # instance properties def __init__(self, devices = None): if not hasattr(self, "_class_initialized"): self._init_class() assert hasattr(self, "_class_initialized") self._register_instance(self) if devices is not None: self.devices = devices else: self.devices = self.get_available_devices() self.update() def __del__(self): try: self.cleanup() except: pass def cleanup(self): self._deregister_instance(self) self._refresh_updating_devices() @property def devices(self): return self._devices @devices.setter def devices(self, value): new_devices = self._available_devices & set(value) self._devices = new_devices self._refresh_updating_devices() def add_device(self, device): assert isinstance(device, str) if device in self._available_devices: self._devices.add(device) self._updating_devices.add(device) def remove_device(self, device): assert isinstance(device, str) if device in self._devices: self._devices.remove(device) self._updating_devices.remove(device) def get_load(self): return dict([dev_load for dev_load in list(self._load.items()) if dev_load[0] in self._devices]) def get_device_load(self, device): return self._load.get(device, None) tuned-2.10.0/tuned/monitors/monitor_disk.py000066400000000000000000000015661331721725100207670ustar00rootroot00000000000000import tuned.monitors import os class DiskMonitor(tuned.monitors.Monitor): _supported_vendors = ["ATA", "SCSI"] @classmethod def _init_available_devices(cls): 
block_devices = os.listdir("/sys/block") available = set(filter(cls._is_device_supported, block_devices)) cls._available_devices = available for d in available: cls._load[d] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] @classmethod def _is_device_supported(cls, device): vendor_file = "/sys/block/%s/device/vendor" % device try: vendor = open(vendor_file).read().strip() except IOError: return False return vendor in cls._supported_vendors @classmethod def update(cls): for device in cls._updating_devices: cls._update_disk(device) @classmethod def _update_disk(cls, dev): with open("/sys/block/" + dev + "/stat") as statfile: cls._load[dev] = list(map(int, statfile.read().split())) tuned-2.10.0/tuned/monitors/monitor_load.py000066400000000000000000000004621331721725100207460ustar00rootroot00000000000000import tuned.monitors class LoadMonitor(tuned.monitors.Monitor): @classmethod def _init_available_devices(cls): cls._available_devices = set(["system"]) @classmethod def update(cls): with open("/proc/loadavg") as statfile: data = statfile.read().split() cls._load["system"] = float(data[0]) tuned-2.10.0/tuned/monitors/monitor_net.py000066400000000000000000000022231331721725100206120ustar00rootroot00000000000000import tuned.monitors import os import re from tuned.utils.nettool import ethcard class NetMonitor(tuned.monitors.Monitor): @classmethod def _init_available_devices(cls): available = [] for root, dirs, files in os.walk("/sys/devices"): if root.endswith("/net") and not root.endswith("/virtual/net"): available += dirs cls._available_devices = set(available) for dev in available: #max_speed = cls._calcspeed(ethcard(dev).get_max_speed()) cls._load[dev] = ['0', '0', '0', '0'] @classmethod def _calcspeed(cls, speed): # 0.6 is just a magical constant (empirical value): Typical workload on netcard won't exceed # that and if it does, then the code is smart enough to adapt it. # 1024 * 1024 as for MB -> B # speed / 8 Mb -> MB return (int) (0.6 * 1024 * 1024 * speed / 8) @classmethod def _updateStat(cls, dev): files = ["rx_bytes", "rx_packets", "tx_bytes", "tx_packets"] for i,f in enumerate(files): with open("/sys/class/net/" + dev + "/statistics/" + f) as statfile: cls._load[dev][i] = statfile.read().strip() @classmethod def update(cls): for device in cls._updating_devices: cls._updateStat(device) tuned-2.10.0/tuned/monitors/repository.py000066400000000000000000000014771331721725100205060ustar00rootroot00000000000000import tuned.logs import tuned.monitors from tuned.utils.plugin_loader import PluginLoader log = tuned.logs.get() __all__ = ["Repository"] class Repository(PluginLoader): def __init__(self): super(Repository, self).__init__() self._monitors = set() @property def monitors(self): return self._monitors def _set_loader_parameters(self): self._namespace = "tuned.monitors" self._prefix = "monitor_" self._interface = tuned.monitors.Monitor def create(self, plugin_name, devices): log.debug("creating monitor %s" % plugin_name) monitor_cls = self.load_plugin(plugin_name) monitor_instance = monitor_cls(devices) self._monitors.add(monitor_instance) return monitor_instance def delete(self, monitor): assert isinstance(monitor, self._interface) monitor.cleanup() self._monitors.remove(monitor) tuned-2.10.0/tuned/patterns.py000066400000000000000000000005171331721725100162470ustar00rootroot00000000000000class Singleton(object): """ Singleton design pattern. 
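    A minimal usage sketch (illustrative only; the subclass is hypothetical):

        class ExampleRegistry(Singleton):
            pass

        first = ExampleRegistry.get_instance()
        second = ExampleRegistry.get_instance()
        assert first is second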
""" _instance = None def __init__(self): if self.__class__ is Singleton: raise TypeError("Cannot instantiate directly.") @classmethod def get_instance(cls): """Get the class instance.""" if cls._instance is None: cls._instance = cls() return cls._instance tuned-2.10.0/tuned/plugins/000077500000000000000000000000001331721725100155135ustar00rootroot00000000000000tuned-2.10.0/tuned/plugins/__init__.py000066400000000000000000000000611331721725100176210ustar00rootroot00000000000000from .repository import * from . import instance tuned-2.10.0/tuned/plugins/base.py000066400000000000000000000514051331721725100170040ustar00rootroot00000000000000import re import tuned.consts as consts import tuned.profiles.variables import tuned.logs import collections from tuned.utils.commands import commands import os from subprocess import Popen, PIPE log = tuned.logs.get() class Plugin(object): """ Base class for all plugins. Plugins change various system settings in order to get desired performance or power saving. Plugins use Monitor objects to get information from the running system. Intentionally a lot of logic is included in the plugin to increase plugin flexibility. """ def __init__(self, monitors_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, instance_factory, global_cfg, variables): """Plugin constructor.""" self._storage = storage_factory.create(self.__class__.__name__) self._monitors_repository = monitors_repository self._hardware_inventory = hardware_inventory self._device_matcher = device_matcher self._device_matcher_udev = device_matcher_udev self._instance_factory = instance_factory self._instances = collections.OrderedDict() self._init_commands() self._init_devices() self._global_cfg = global_cfg self._variables = variables self._has_dynamic_options = False self._options_used_by_dynamic = self._get_config_options_used_by_dynamic() self._cmd = commands() def cleanup(self): self.destroy_instances() @property def name(self): return self.__class__.__module__.split(".")[-1].split("_", 1)[1] # # Plugin configuration manipulation and helpers. # @classmethod def _get_config_options(self): """Default configuration options for the plugin.""" return {} @classmethod def _get_config_options_used_by_dynamic(self): """List of config options used by dynamic tuning. Their previous values will be automatically saved and restored.""" return [] def _get_effective_options(self, options): """Merge provided options with plugin default options.""" # TODO: _has_dynamic_options is a hack effective = self._get_config_options().copy() for key in options: if key in effective or self._has_dynamic_options: effective[key] = options[key] else: log.warn("Unknown option '%s' for plugin '%s'." % (key, self.__class__.__name__)) return effective def _option_bool(self, value): if type(value) is bool: return value value = str(value).lower() return value == "true" or value == "1" # # Interface for manipulation with instances of the plugin. # def create_instance(self, name, devices_expression, devices_udev_regex, script_pre, script_post, options): """Create new instance of the plugin and seize the devices.""" if name in self._instances: raise Exception("Plugin instance with name '%s' already exists." 
% name) effective_options = self._get_effective_options(options) instance = self._instance_factory.create(self, name, devices_expression, devices_udev_regex, \ script_pre, script_post, effective_options) self._instances[name] = instance return instance def destroy_instance(self, instance): """Destroy existing instance.""" if instance._plugin != self: raise Exception("Plugin instance '%s' does not belong to this plugin '%s'." % (instance, self)) if instance.name not in self._instances: raise Exception("Plugin instance '%s' was already destroyed." % instance) instance = self._instances[instance.name] self._destroy_instance(instance) del self._instances[instance.name] def initialize_instance(self, instance): """Initialize an instance.""" log.debug("initializing instance %s (%s)" % (instance.name, self.name)) self._instance_init(instance) def destroy_instances(self): """Destroy all instances.""" for instance in list(self._instances.values()): log.debug("destroying instance %s (%s)" % (instance.name, self.name)) self._destroy_instance(instance) self._instances.clear() def _destroy_instance(self, instance): self.release_devices(instance) self._instance_cleanup(instance) def _instance_init(self, instance): raise NotImplementedError() def _instance_cleanup(self, instance): raise NotImplementedError() # # Devices handling # def _init_devices(self): self._devices_supported = False self._assigned_devices = set() self._free_devices = set() def _get_device_objects(self, devices): """Override this in a subclass to transform a list of device names (e.g. ['sda']) to a list of pyudev.Device objects, if your plugin supports it""" return None def _get_matching_devices(self, instance, devices): if instance.devices_udev_regex is None: return set(self._device_matcher.match_list(instance.devices_expression, devices)) else: udev_devices = self._get_device_objects(devices) if udev_devices is None: log.error("Plugin '%s' does not support the 'devices_udev_regex' option", self.name) return set() udev_devices = self._device_matcher_udev.match_list(instance.devices_udev_regex, udev_devices) return set([x.sys_name for x in udev_devices]) def assign_free_devices(self, instance): if not self._devices_supported: return log.debug("assigning devices to instance %s" % instance.name) to_assign = self._get_matching_devices(instance, self._free_devices) instance.active = len(to_assign) > 0 if not instance.active: log.warn("instance %s: no matching devices available" % instance.name) else: name = instance.name if instance.name != self.name: name += " (%s)" % self.name log.info("instance %s: assigning devices %s" % (name, ", ".join(to_assign))) instance.devices.update(to_assign) # cannot use |= self._assigned_devices |= to_assign self._free_devices -= to_assign def release_devices(self, instance): if not self._devices_supported: return to_release = instance.devices & self._assigned_devices instance.active = False instance.devices.clear() self._assigned_devices -= to_release self._free_devices |= to_release # # Tuning activation and deactivation. 
# def _run_for_each_device(self, instance, callback): if self._devices_supported: devices = instance.devices else: devices = [None, ] for device in devices: callback(instance, device) def _instance_pre_static(self, instance, enabling): pass def _instance_post_static(self, instance, enabling): pass def _call_device_script(self, instance, script, op, devices, full_rollback = False): if script is None: return None if len(devices) == 0: log.warn("Instance '%s': no device to call script '%s' for." % (instance.name, script)) return None if not script.startswith("/"): log.error("Relative paths cannot be used in script_pre or script_post. " \ + "Use ${i:PROFILE_DIR}.") return False dir_name = os.path.dirname(script) ret = True for dev in devices: environ = os.environ environ.update(self._variables.get_env()) arguments = [op] if full_rollback: arguments.append("full_rollback") arguments.append(dev) log.info("calling script '%s' with arguments '%s'" % (script, str(arguments))) log.debug("using environment '%s'" % str(list(environ.items()))) try: proc = Popen([script] + arguments, \ stdout=PIPE, stderr=PIPE, \ close_fds=True, env=environ, \ cwd = dir_name, universal_newlines = True) out, err = proc.communicate() if proc.returncode: log.error("script '%s' error: %d, '%s'" % (script, proc.returncode, err[:-1])) ret = False except (OSError,IOError) as e: log.error("script '%s' error: %s" % (script, e)) ret = False return ret def instance_apply_tuning(self, instance): """ Apply static and dynamic tuning if the plugin instance is active. """ if not instance.active: return if instance.has_static_tuning: self._call_device_script(instance, instance.script_pre, "apply", instance.devices) self._instance_pre_static(instance, True) self._instance_apply_static(instance) self._instance_post_static(instance, True) self._call_device_script(instance, instance.script_post, "apply", instance.devices) if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._run_for_each_device(instance, self._instance_apply_dynamic) def instance_verify_tuning(self, instance, ignore_missing): """ Verify static tuning if the plugin instance is active. """ if not instance.active: return None if instance.has_static_tuning: if self._call_device_script(instance, instance.script_pre, "verify", instance.devices) == False: return False if self._instance_verify_static(instance, ignore_missing) == False: return False if self._call_device_script(instance, instance.script_post, "verify", instance.devices) == False: return False return True else: return None def instance_update_tuning(self, instance): """ Apply dynamic tuning if the plugin instance is active. """ if not instance.active: return if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._run_for_each_device(instance, self._instance_update_dynamic) def instance_unapply_tuning(self, instance, full_rollback = False): """ Remove all tunings applied by the plugin instance. 
""" if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._run_for_each_device(instance, self._instance_unapply_dynamic) if instance.has_static_tuning: self._call_device_script(instance, instance.script_post, "unapply", instance.devices, full_rollback = full_rollback) self._instance_pre_static(instance, False) self._instance_unapply_static(instance, full_rollback) self._instance_post_static(instance, False) self._call_device_script(instance, instance.script_pre, "unapply", instance.devices, full_rollback = full_rollback) def _instance_apply_static(self, instance): self._execute_all_non_device_commands(instance) self._execute_all_device_commands(instance, instance.devices) def _instance_verify_static(self, instance, ignore_missing): ret = True if self._verify_all_non_device_commands(instance, ignore_missing) == False: ret = False if self._verify_all_device_commands(instance, instance.devices, ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): self._cleanup_all_device_commands(instance, instance.devices) self._cleanup_all_non_device_commands(instance) def _instance_apply_dynamic(self, instance, device): for option in [opt for opt in self._options_used_by_dynamic if self._storage_get(instance, self._commands[opt], device) is None]: self._check_and_save_value(instance, self._commands[option], device) self._instance_update_dynamic(instance, device) def _instance_unapply_dynamic(self, instance, device): raise NotImplementedError() def _instance_update_dynamic(self, instance, device): raise NotImplementedError() # # Registration of commands for static plugins. # def _init_commands(self): """ Initialize commands. """ self._commands = collections.OrderedDict() self._autoregister_commands() self._check_commands() def _autoregister_commands(self): """ Register all commands marked using @command_set, @command_get, and @command_custom decorators. """ for member_name in self.__class__.__dict__: if member_name.startswith("__"): continue member = getattr(self, member_name) if not hasattr(member, "_command"): continue command_name = member._command["name"] info = self._commands.get(command_name, {"name": command_name}) if "set" in member._command: info["custom"] = None info["set"] = member info["per_device"] = member._command["per_device"] info["priority"] = member._command["priority"] elif "get" in member._command: info["get"] = member elif "custom" in member._command: info["custom"] = member info["per_device"] = member._command["per_device"] info["priority"] = member._command["priority"] self._commands[command_name] = info # sort commands by priority self._commands = collections.OrderedDict(sorted(iter(self._commands.items()), key=lambda name_info: name_info[1]["priority"])) def _check_commands(self): """ Check if all commands are defined correctly. """ for command_name, command in list(self._commands.items()): # do not check custom commands if command.get("custom", False): continue # automatic commands should have 'get' and 'set' functions if "get" not in command or "set" not in command: raise TypeError("Plugin command '%s' is not defined correctly" % command_name) # # Operations with persistent storage for status data. 
# def _storage_key(self, instance_name = None, command_name = None, device_name = None): class_name = type(self).__name__ instance_name = "" if instance_name is None else instance_name command_name = "" if command_name is None else command_name device_name = "" if device_name is None else device_name return "%s/%s/%s/%s" % (class_name, instance_name, command_name, device_name) def _storage_set(self, instance, command, value, device_name=None): key = self._storage_key(instance.name, command["name"], device_name) self._storage.set(key, value) def _storage_get(self, instance, command, device_name=None): key = self._storage_key(instance.name, command["name"], device_name) return self._storage.get(key) def _storage_unset(self, instance, command, device_name=None): key = self._storage_key(instance.name, command["name"], device_name) return self._storage.unset(key) # # Command execution, verification, and cleanup. # def _execute_all_non_device_commands(self, instance): for command in [command for command in list(self._commands.values()) if not command["per_device"]]: new_value = self._variables.expand(instance.options.get(command["name"], None)) if new_value is not None: self._execute_non_device_command(instance, command, new_value) def _execute_all_device_commands(self, instance, devices): for command in [command for command in list(self._commands.values()) if command["per_device"]]: new_value = self._variables.expand(instance.options.get(command["name"], None)) if new_value is None: continue for device in devices: self._execute_device_command(instance, command, device, new_value) def _verify_all_non_device_commands(self, instance, ignore_missing): ret = True for command in [command for command in list(self._commands.values()) if not command["per_device"]]: new_value = self._variables.expand(instance.options.get(command["name"], None)) if new_value is not None: if self._verify_non_device_command(instance, command, new_value, ignore_missing) == False: ret = False return ret def _verify_all_device_commands(self, instance, devices, ignore_missing): ret = True for command in [command for command in list(self._commands.values()) if command["per_device"]]: new_value = instance.options.get(command["name"], None) if new_value is None: continue for device in devices: if self._verify_device_command(instance, command, device, new_value, ignore_missing) == False: ret = False return ret def _process_assignment_modifiers(self, new_value, current_value): if new_value is not None: nws = str(new_value) if len(nws) <= 1: return new_value op = nws[:1] val = nws[1:] if current_value is None: return val if op in ["<", ">"] else new_value try: if op == ">": if int(val) > int(current_value): return val else: return None elif op == "<": if int(val) < int(current_value): return val else: return None except ValueError: log.warn("cannot compare new value '%s' with current value '%s' by operator '%s', using '%s' directly as new value" % (val, current_value, op, new_value)) return new_value def _get_current_value(self, command, device = None, ignore_missing=False): if device is not None: return command["get"](device, ignore_missing=ignore_missing) else: return command["get"]() def _check_and_save_value(self, instance, command, device = None, new_value = None): current_value = self._get_current_value(command, device) new_value = self._process_assignment_modifiers(new_value, current_value) if new_value is not None and current_value is not None: self._storage_set(instance, command, current_value, device) return new_value def 
_execute_device_command(self, instance, command, device, new_value): if command["custom"] is not None: command["custom"](True, new_value, device, False, False) else: new_value = self._check_and_save_value(instance, command, device, new_value) if new_value is not None: command["set"](new_value, device, sim = False) def _execute_non_device_command(self, instance, command, new_value): if command["custom"] is not None: command["custom"](True, new_value, False, False) else: new_value = self._check_and_save_value(instance, command, None, new_value) if new_value is not None: command["set"](new_value, sim = False) def _norm_value(self, value): v = self._cmd.unquote(str(value)) if re.match(r'\s*(0+,?)+([\da-fA-F]*,?)*\s*$', v): return re.sub(r'^\s*(0+,?)+', "", v) return v def _verify_value(self, name, new_value, current_value, ignore_missing, device = None): if new_value is None: return None ret = False if current_value is None and ignore_missing: if device is None: log.info(consts.STR_VERIFY_PROFILE_VALUE_MISSING % name) else: log.info(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_MISSING % (device, name)) return True if current_value is not None: current_value = self._norm_value(current_value) new_value = self._norm_value(new_value) try: ret = int(new_value) == int(current_value) except ValueError: try: ret = int(new_value, 16) == int(current_value, 16) except ValueError: ret = str(new_value) == str(current_value) if not ret: vals = str(new_value).split('|') for val in vals: val = val.strip() ret = val == current_value if ret: break if ret: if device is None: log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % (name, str(current_value).strip())) else: log.info(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_OK % (device, name, str(current_value).strip())) return True else: if device is None: log.error(consts.STR_VERIFY_PROFILE_VALUE_FAIL % (name, str(current_value).strip(), str(new_value).strip())) else: log.error(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_FAIL % (device, name, str(current_value).strip(), str(new_value).strip())) return False def _verify_device_command(self, instance, command, device, new_value, ignore_missing): if command["custom"] is not None: return command["custom"](True, new_value, device, True, ignore_missing) current_value = self._get_current_value(command, device, ignore_missing=ignore_missing) new_value = self._process_assignment_modifiers(new_value, current_value) if new_value is None: return None new_value = command["set"](new_value, device, True) return self._verify_value(command["name"], new_value, current_value, ignore_missing, device) def _verify_non_device_command(self, instance, command, new_value, ignore_missing): if command["custom"] is not None: return command["custom"](True, new_value, True, ignore_missing) current_value = self._get_current_value(command) new_value = self._process_assignment_modifiers(new_value, current_value) if new_value is None: return None new_value = command["set"](new_value, True) return self._verify_value(command["name"], new_value, current_value, ignore_missing) def _cleanup_all_non_device_commands(self, instance): for command in reversed([command for command in list(self._commands.values()) if not command["per_device"]]): if (instance.options.get(command["name"], None) is not None) or (command["name"] in self._options_used_by_dynamic): self._cleanup_non_device_command(instance, command) def _cleanup_all_device_commands(self, instance, devices): for command in reversed([command for command in list(self._commands.values()) if command["per_device"]]): if 
(instance.options.get(command["name"], None) is not None) or (command["name"] in self._options_used_by_dynamic): for device in devices: self._cleanup_device_command(instance, command, device) def _cleanup_device_command(self, instance, command, device): if command["custom"] is not None: command["custom"](False, None, device, False, False) else: old_value = self._storage_get(instance, command, device) if old_value is not None: command["set"](old_value, device, sim = False) self._storage_unset(instance, command, device) def _cleanup_non_device_command(self, instance, command): if command["custom"] is not None: command["custom"](False, None, False, False) else: old_value = self._storage_get(instance, command) if old_value is not None: command["set"](old_value, sim = False) self._storage_unset(instance, command) tuned-2.10.0/tuned/plugins/decorators.py000066400000000000000000000017271331721725100202410ustar00rootroot00000000000000__all__ = ["command_set", "command_get", "command_custom"] # @command_set("scheduler", per_device=True) # def set_scheduler(self, value, device): # set_new_scheduler # # @command_get("scheduler") # def get_scheduler(self, device): # return current_scheduler # # @command_set("foo") # def set_foo(self, value): # set_new_foo # # @command_get("foo") # def get_foo(self): # return current_foo # def command_set(name, per_device=False, priority=0): def wrapper(method): method._command = { "set": True, "name": name, "per_device": per_device, "priority": priority, } return method return wrapper def command_get(name): def wrapper(method): method._command = { "get": True, "name": name, } return method return wrapper def command_custom(name, per_device=False, priority=0): def wrapper(method): method._command = { "custom": True, "name": name, "per_device": per_device, "priority": priority, } return method return wrapper tuned-2.10.0/tuned/plugins/exceptions.py000066400000000000000000000001431331721725100202440ustar00rootroot00000000000000import tuned.exceptions class NotSupportedPluginException(tuned.exceptions.TunedException): pass tuned-2.10.0/tuned/plugins/hotplug.py000066400000000000000000000054171331721725100175560ustar00rootroot00000000000000from . import base import tuned.consts as consts import tuned.logs log = tuned.logs.get() class Plugin(base.Plugin): """ Base class for plugins with device hotpluging support. 
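    Subclasses are expected to implement _hardware_events_init() and
    _hardware_events_cleanup(). A minimal sketch of the udev wiring
    (illustrative only; the 'block' subsystem is an example):

        def _hardware_events_init(self):
            self._hardware_inventory.subscribe(self, 'block', self._hardware_events_callback)

        def _hardware_events_cleanup(self):
            self._hardware_inventory.unsubscribe(self)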
""" def __init__(self, *args, **kwargs): super(Plugin, self).__init__(*args, **kwargs) self._hardware_events_init() def cleanup(self): super(Plugin, self).cleanup() self._hardware_events_cleanup() def _hardware_events_init(self): raise NotImplementedError() def _hardware_events_cleanup(self): raise NotImplementedError() def _hardware_events_callback(self, event, device): if event == "add": log.info("device '%s' added" % device.sys_name) self._add_device(device) elif event == "remove": log.info("device '%s' removed" % device.sys_name) self._remove_device(device) def _add_device(self, device): device_name = device.sys_name if device_name in (self._assigned_devices | self._free_devices): return for instance_name, instance in list(self._instances.items()): if len(self._get_matching_devices(instance, [device_name])) == 1: log.info("instance %s: adding new device %s" % (instance_name, device_name)) self._assigned_devices.add(device_name) instance.devices.add(device_name) self._call_device_script(instance, instance.script_pre, "apply", [device_name]) self._added_device_apply_tuning(instance, device_name) self._call_device_script(instance, instance.script_post, "apply", [device_name]) break else: log.debug("no instance wants %s" % device_name) self._free_devices.add(device_name) def _remove_device(self, device): device_name = device.sys_name if device_name not in (self._assigned_devices | self._free_devices): return for instance in list(self._instances.values()): if device_name in instance.devices: self._call_device_script(instance, instance.script_post, "unapply", [device_name]) self._removed_device_unapply_tuning(instance, device_name) self._call_device_script(instance, instance.script_pre, "unapply", [device_name]) instance.devices.remove(device_name) instance.active = len(instance.devices) > 0 self._assigned_devices.remove(device_name) break else: self._free_devices.remove(device_name) def _added_device_apply_tuning(self, instance, device_name): self._execute_all_device_commands(instance, [device_name]) if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._instance_apply_dynamic(instance, device_name) def _removed_device_unapply_tuning(self, instance, device_name): if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._instance_unapply_dynamic(instance, device_name) self._cleanup_all_device_commands(instance, [device_name]) tuned-2.10.0/tuned/plugins/instance/000077500000000000000000000000001331721725100173175ustar00rootroot00000000000000tuned-2.10.0/tuned/plugins/instance/__init__.py000066400000000000000000000000741331721725100214310ustar00rootroot00000000000000from .instance import Instance from .factory import Factory tuned-2.10.0/tuned/plugins/instance/factory.py000066400000000000000000000002241331721725100213360ustar00rootroot00000000000000from .instance import Instance class Factory(object): def create(self, *args, **kwargs): instance = Instance(*args, **kwargs) return instance tuned-2.10.0/tuned/plugins/instance/instance.py000066400000000000000000000034031331721725100214750ustar00rootroot00000000000000class Instance(object): """ """ def __init__(self, plugin, name, devices_expression, devices_udev_regex, script_pre, script_post, options): self._plugin = plugin self._name = name self._devices_expression = devices_expression self._devices_udev_regex = devices_udev_regex self._script_pre = script_pre self._script_post = script_post self._options = options 
self._active = True self._has_static_tuning = False self._has_dynamic_tuning = False self._devices = set() # properties @property def plugin(self): return self._plugin @property def name(self): return self._name @property def active(self): """The instance performs some tuning (otherwise it is suspended).""" return self._active @active.setter def active(self, value): self._active = value @property def devices_expression(self): return self._devices_expression @property def devices(self): return self._devices @property def devices_udev_regex(self): return self._devices_udev_regex @property def script_pre(self): return self._script_pre @property def script_post(self): return self._script_post @property def options(self): return self._options @property def has_static_tuning(self): return self._has_static_tuning @property def has_dynamic_tuning(self): return self._has_dynamic_tuning # methods def apply_tuning(self): self._plugin.instance_apply_tuning(self) def verify_tuning(self, ignore_missing): return self._plugin.instance_verify_tuning(self, ignore_missing) def update_tuning(self): self._plugin.instance_update_tuning(self) def unapply_tuning(self, full_rollback = False): self._plugin.instance_unapply_tuning(self, full_rollback) def destroy(self): self.unapply_tuning() self._plugin.destroy_instance(self) tuned-2.10.0/tuned/plugins/plugin_audio.py000066400000000000000000000047341331721725100205520ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import os import struct import glob log = tuned.logs.get() cmd = commands() class AudioPlugin(base.Plugin): """ Plugin for tuning audio card power-saving options. Power management is supported per module, not per device. For this reason, kernel module names are used as device names.
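    An illustrative profile snippet using this plugin (the values are
    examples only, not recommendations):

        [audio]
        timeout=10
        reset_controller=1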
""" def _init_devices(self): self._devices_supported = True self._assigned_devices = set() self._free_devices = set() for device in self._hardware_inventory.get_devices("sound").match_sys_name("card*"): module_name = self._device_module_name(device) if module_name in ["snd_hda_intel", "snd_ac97_codec"]: self._free_devices.add(module_name) def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass def _device_module_name(self, device): try: return device.parent.driver except: return None @classmethod def _get_config_options(cls): return { "timeout": 0, "reset_controller": False, } def _timeout_path(self, device): return "/sys/module/%s/parameters/power_save" % device def _reset_controller_path(self, device): return "/sys/module/%s/parameters/power_save_controller" % device @command_set("timeout", per_device = True) def _set_timeout(self, value, device, sim): try: timeout = int(value) except ValueError: log.error("timeout value '%s' is not integer" % value) return None if timeout >= 0: sys_file = self._timeout_path(device) if not sim: cmd.write_to_file(sys_file, "%d" % timeout) return timeout else: return None @command_get("timeout") def _get_timeout(self, device, ignore_missing=False): sys_file = self._timeout_path(device) value = cmd.read_file(sys_file, no_error=ignore_missing) if len(value) > 0: return value return None @command_set("reset_controller", per_device = True) def _set_reset_controller(self, value, device, sim): v = cmd.get_bool(value) sys_file = self._reset_controller_path(device) if os.path.exists(sys_file): if not sim: cmd.write_to_file(sys_file, v) return v return None @command_get("reset_controller") def _get_reset_controller(self, device, ignore_missing=False): sys_file = self._reset_controller_path(device) if os.path.exists(sys_file): value = cmd.read_file(sys_file) if len(value) > 0: return cmd.get_bool(value) return None tuned-2.10.0/tuned/plugins/plugin_bootloader.py000066400000000000000000000271741331721725100216100ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from . import exceptions from tuned.utils.commands import commands import tuned.consts as consts import os import re import tempfile log = tuned.logs.get() class BootloaderPlugin(base.Plugin): """ Plugin for tuning bootloader options. Currently only grub2 is supported and reboot is required to apply the tunings. These tunings are unloaded only on profile change followed by reboot. 
""" def __init__(self, *args, **kwargs): if not os.path.isfile(consts.GRUB2_TUNED_TEMPLATE_PATH): raise exceptions.NotSupportedPluginException("Required GRUB2 template not found, disabling plugin.") super(BootloaderPlugin, self).__init__(*args, **kwargs) self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True # controls grub2_cfg rewrites in _instance_post_static self.update_grub2_cfg = False self._initrd_remove_dir = False self._initrd_dst_img_val = None self._cmdline_val = "" self._initrd_val = "" self._grub2_cfg_file_names = self._get_grub2_cfg_files() def _instance_cleanup(self, instance): pass @classmethod def _get_config_options(cls): return { "grub2_cfg_file": None, "initrd_dst_img": None, "initrd_add_img": None, "initrd_add_dir": None, "initrd_remove_dir": None, "cmdline": None, } def _get_effective_options(self, options): """Merge provided options with plugin default options and merge all cmdline.* options.""" effective = self._get_config_options().copy() cmdline_keys = [] for key in options: if str(key).startswith("cmdline"): cmdline_keys.append(key) elif key in effective: effective[key] = options[key] else: log.warn("Unknown option '%s' for plugin '%s'." % (key, self.__class__.__name__)) cmdline_keys.sort() cmdline = "" for key in cmdline_keys: val = options[key] if val is None or val == "": continue op = val[0] vals = val[1:].strip() if op == "+" and vals != "": cmdline += " " + vals elif op == "-" and vals != "": for p in vals.split(): regex = re.escape(p) cmdline = re.sub(r"(\A|\s)" + regex + r"(?=\Z|\s)", r"", cmdline) else: cmdline += " " + val cmdline = cmdline.strip() if cmdline != "": effective["cmdline"] = cmdline return effective def _get_grub2_cfg_files(self): cfg_files = [] for f in consts.GRUB2_CFG_FILES: if os.path.exists(f): cfg_files.append(f) return cfg_files def _patch_bootcmdline(self, d): return self._cmd.add_modify_option_in_file(consts.BOOT_CMDLINE_FILE, d) def _remove_grub2_tuning(self): if not self._grub2_cfg_file_names: log.error("cannot find grub.cfg to patch, you need to regenerate it by hand using grub2-mkconfig") return self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR : "", consts.BOOT_CMDLINE_INITRD_ADD_VAR : ""}) for f in self._grub2_cfg_file_names: self._cmd.add_modify_option_in_file(f, {"set\s+" + consts.GRUB2_TUNED_VAR : "", "set\s+" + consts.GRUB2_TUNED_INITRD_VAR : ""}, add = False) if self._initrd_dst_img_val is not None: log.info("removing initrd image '%s'" % self._initrd_dst_img_val) self._cmd.unlink(self._initrd_dst_img_val) def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback: log.info("removing grub2 tuning previously added by Tuned") self._remove_grub2_tuning() def _grub2_cfg_unpatch(self, grub2_cfg): log.debug("unpatching grub.cfg") cfg = re.sub(r"^\s*set\s+" + consts.GRUB2_TUNED_VAR + "\s*=.*\n", "", grub2_cfg, flags = re.MULTILINE) grub2_cfg = re.sub(r" *\$" + consts.GRUB2_TUNED_VAR, "", cfg, flags = re.MULTILINE) cfg = re.sub(r"^\s*set\s+" + consts.GRUB2_TUNED_INITRD_VAR + "\s*=.*\n", "", grub2_cfg, flags = re.MULTILINE) grub2_cfg = re.sub(r" *\$" + consts.GRUB2_TUNED_INITRD_VAR, "", cfg, flags = re.MULTILINE) cfg = re.sub(consts.GRUB2_TEMPLATE_HEADER_BEGIN + r"\n", "", grub2_cfg, flags = re.MULTILINE) return re.sub(consts.GRUB2_TEMPLATE_HEADER_END + r"\n+", "", cfg, flags = re.MULTILINE) def _grub2_cfg_patch_initial(self, grub2_cfg, d): log.debug("initial patching of grub.cfg") s = r"\1\n\n" + 
consts.GRUB2_TEMPLATE_HEADER_BEGIN + "\n" for opt in d: s += r"set " + self._cmd.escape(opt) + "=\"" + self._cmd.escape(d[opt]) + "\"\n" s += consts.GRUB2_TEMPLATE_HEADER_END + r"\n" grub2_cfg = re.sub(r"^(\s*###\s+END\s+[^#]+/00_header\s+### *)\n", s, grub2_cfg, flags = re.MULTILINE) d2 = {"linux" : consts.GRUB2_TUNED_VAR, "initrd" : consts.GRUB2_TUNED_INITRD_VAR} for i in d2: # add tuned parameters to all kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(16|efi)?\s+.*)$", r"\1 $" + d2[i], grub2_cfg, flags = re.MULTILINE) # remove tuned parameters from rescue kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(?:16|efi)?\s+\S+rescue.*)\$" + d2[i] + r" *(.*)$", r"\1\2", grub2_cfg, flags = re.MULTILINE) # fix whitespaces in rescue kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(?:16|efi)?\s+\S+rescue.*) +$", r"\1", grub2_cfg, flags = re.MULTILINE) return grub2_cfg def _grub2_default_env_patch(self): grub2_default_env = self._cmd.read_file(consts.GRUB2_DEFAULT_ENV_FILE) if len(grub2_default_env) <= 0: log.error("error reading '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) return False d = {"GRUB_CMDLINE_LINUX_DEFAULT" : consts.GRUB2_TUNED_VAR, "GRUB_INITRD_OVERLAY" : consts.GRUB2_TUNED_INITRD_VAR} write = False for i in d: if re.search(r"^[^#]*\b" + i + r"\s*=.*\\\$" + d[i] + r"\b.*$", grub2_default_env, flags = re.MULTILINE) is None: write = True if grub2_default_env[-1] != "\n": grub2_default_env += "\n" grub2_default_env += i + "=\"${" + i + ":+$" + i + r" }\$" + d[i] + "\"\n" if write: log.debug("patching '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) self._cmd.write_to_file(consts.GRUB2_DEFAULT_ENV_FILE, grub2_default_env) return True def _grub2_cfg_patch(self, d): log.debug("patching grub.cfg") if not self._grub2_cfg_file_names: log.error("cannot find grub.cfg to patch, you need to regenerate it by hand by grub2-mkconfig") return False for f in self._grub2_cfg_file_names: grub2_cfg = self._cmd.read_file(f) if len(grub2_cfg) <= 0: log.error("error patching %s, you need to regenerate it by hand by grub2-mkconfig" % f) return False log.debug("adding boot command line parameters to '%s'" % f) grub2_cfg_new = grub2_cfg patch_initial = False for opt in d: (grub2_cfg_new, nsubs) = re.subn(r"\b(set\s+" + opt + "\s*=).*$", r"\1" + "\"" + d[opt] + "\"", grub2_cfg_new, flags = re.MULTILINE) if nsubs < 1 or re.search(r"\$" + opt, grub2_cfg, flags = re.MULTILINE) is None: patch_initial = True # workaround for rhbz#1442117 if len(re.findall(r"\$" + consts.GRUB2_TUNED_VAR, grub2_cfg, flags = re.MULTILINE)) != \ len(re.findall(r"\$" + consts.GRUB2_TUNED_INITRD_VAR, grub2_cfg, flags = re.MULTILINE)): patch_initial = True if patch_initial: grub2_cfg_new = self._grub2_cfg_patch_initial(self._grub2_cfg_unpatch(grub2_cfg), d) self._cmd.write_to_file(f, grub2_cfg_new) self._grub2_default_env_patch() return True def _grub2_update(self): self._grub2_cfg_patch({consts.GRUB2_TUNED_VAR : self._cmdline_val, consts.GRUB2_TUNED_INITRD_VAR : self._initrd_val}) self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR : self._cmdline_val, consts.BOOT_CMDLINE_INITRD_ADD_VAR : self._initrd_val}) def _init_initrd_dst_img(self, name): if self._initrd_dst_img_val is None: self._initrd_dst_img_val = os.path.join(consts.BOOT_DIR, os.path.basename(name)) def _check_petitboot(self): return os.path.isdir(consts.PETITBOOT_DETECT_DIR) def _install_initrd(self, img): if self._check_petitboot(): log.warn("Detected Petitboot which doesn't support initrd overlays. 
The initrd overlay will be ignored by bootloader.") log.info("installing initrd image as '%s'" % self._initrd_dst_img_val) img_name = os.path.basename(self._initrd_dst_img_val) if not self._cmd.copy(img, self._initrd_dst_img_val): return False self.update_grub2_cfg = True curr_cmdline = self._cmd.read_file("/proc/cmdline").rstrip() initrd_grubpath = "/" lc = len(curr_cmdline) if lc: path = re.sub(r"^\s*BOOT_IMAGE=\s*(\S*/).*$", "\\1", curr_cmdline) if len(path) < lc: initrd_grubpath = path self._initrd_val = os.path.join(initrd_grubpath, img_name) return True @command_custom("grub2_cfg_file") def _grub2_cfg_file(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._grub2_cfg_file_names = [str(value)] @command_custom("initrd_dst_img") def _initrd_dst_img(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._initrd_dst_img_val = str(value) if self._initrd_dst_img_val == "": return False if self._initrd_dst_img_val[0] != "/": self._initrd_dst_img_val = os.path.join(consts.BOOT_DIR, self._initrd_dst_img_val) @command_custom("initrd_remove_dir") def _initrd_remove_dir(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._initrd_remove_dir = self._cmd.get_bool(value) == "1" @command_custom("initrd_add_img", per_device = False, priority = 10) def _initrd_add_img(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: src_img = str(value) self._init_initrd_dst_img(src_img) if src_img == "": return False if not self._install_initrd(src_img): return False @command_custom("initrd_add_dir", per_device = False, priority = 10) def _initrd_add_dir(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: src_dir = str(value) self._init_initrd_dst_img(src_dir) if src_dir == "": return False if not os.path.isdir(src_dir): log.error("error: cannot create initrd image, source directory '%s' doesn't exist" % src_dir) return False log.info("generating initrd image from directory '%s'" % src_dir) (fd, tmpfile) = tempfile.mkstemp(prefix = "tuned-bootloader-", suffix = ".tmp") log.debug("writing initrd image to temporary file '%s'" % tmpfile) os.close(fd) (rc, out) = self._cmd.execute("find . 
| cpio -co > %s" % tmpfile, cwd = src_dir, shell = True) log.debug("cpio log: %s" % out) if rc != 0: log.error("error generating initrd image") self._cmd.unlink(tmpfile, no_error = True) return False self._install_initrd(tmpfile) self._cmd.unlink(tmpfile) if self._initrd_remove_dir: log.info("removing directory '%s'" % src_dir) self._cmd.rmtree(src_dir) @command_custom("cmdline", per_device = False, priority = 10) def _cmdline(self, enabling, value, verify, ignore_missing): v = self._variables.expand(self._cmd.unquote(value)) if verify: cmdline = self._cmd.read_file("/proc/cmdline") if len(cmdline) == 0: return None cmdline_set = set(cmdline.split()) value_set = set(v.split()) cmdline_intersect = cmdline_set.intersection(value_set) if cmdline_intersect == value_set: log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % ("cmdline", str(value_set))) return True else: log.error(consts.STR_VERIFY_PROFILE_VALUE_FAIL % ("cmdline", str(cmdline_intersect), str(value_set))) return False if enabling and value is not None: log.info("installing additional boot command line parameters to grub2") self.update_grub2_cfg = True self._cmdline_val = v def _instance_post_static(self, instance, enabling): if enabling and self.update_grub2_cfg: self._grub2_update() self.update_grub2_cfg = False tuned-2.10.0/tuned/plugins/plugin_cpu.py000066400000000000000000000270031331721725100202340ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import tuned.consts as consts import os import struct import errno log = tuned.logs.get() # TODO: force_latency -> command # intel_pstate class CPULatencyPlugin(base.Plugin): """ Plugin for tuning CPU options. Powersaving, governor, required latency, etc. """ def __init__(self, *args, **kwargs): super(CPULatencyPlugin, self).__init__(*args, **kwargs) self._has_pm_qos = True self._has_energy_perf_bias = True self._has_intel_pstate = False self._min_perf_pct_save = None self._max_perf_pct_save = None self._no_turbo_save = None self._governors_map = {} self._cmd = commands() def _init_devices(self): self._devices_supported = True self._free_devices = set() # current list of devices for device in self._hardware_inventory.get_devices("cpu"): self._free_devices.add(device.sys_name) self._assigned_devices = set() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("cpu", x) for x in devices] @classmethod def _get_config_options(self): return { "load_threshold" : 0.2, "latency_low" : 100, "latency_high" : 1000, "force_latency" : None, "governor" : None, "sampling_down_factor" : None, "energy_perf_bias" : None, "min_perf_pct" : None, "max_perf_pct" : None, "no_turbo" : None, } def _check_energy_perf_bias(self): self._has_energy_perf_bias = False retcode_unsupported = 1 retcode = self._cmd.execute(["x86_energy_perf_policy", "-r"], no_errors = [errno.ENOENT, retcode_unsupported])[0] if retcode == 0: self._has_energy_perf_bias = True elif retcode < 0: log.warning("unable to run x86_energy_perf_policy tool, ignoring CPU energy performance bias, is the tool installed?") else: log.warning("your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias") def _check_intel_pstate(self): self._has_intel_pstate = os.path.exists("/sys/devices/system/cpu/intel_pstate") if self._has_intel_pstate: log.info("intel_pstate detected") def _is_cpu_online(self, device): sd = str(device) return self._cmd.is_cpu_online(str(device).replace("cpu", "")) def 
_cpu_has_scaling_governor(self, device): return os.path.exists("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device) def _check_cpu_can_change_governor(self, device): if not self._is_cpu_online(device): log.debug("'%s' is not online, skipping" % device) return False if not self._cpu_has_scaling_governor(device): log.debug("there is no scaling governor fo '%s', skipping" % device) return False return True def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False # only the first instance of the plugin can control the latency if list(self._instances.values())[0] == instance: instance._first_instance = True try: self._cpu_latency_fd = os.open(consts.PATH_CPU_DMA_LATENCY, os.O_WRONLY) except OSError: log.error("Unable to open '%s', disabling PM_QoS control" % consts.PATH_CPU_DMA_LATENCY) self._has_pm_qos = False self._latency = None if instance.options["force_latency"] is None: instance._load_monitor = self._monitors_repository.create("load", None) instance._has_dynamic_tuning = True else: instance._load_monitor = None # Check for x86_energy_perf_policy, ignore if not available / supported self._check_energy_perf_bias() # Check for intel_pstate self._check_intel_pstate() else: instance._first_instance = False log.info("Latency settings from non-first CPU plugin instance '%s' will be ignored." % instance.name) instance._first_device = list(instance.devices)[0] def _instance_cleanup(self, instance): if instance._first_instance: if self._has_pm_qos: os.close(self._cpu_latency_fd) if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) def _get_intel_pstate_attr(self, attr): return self._cmd.read_file("/sys/devices/system/cpu/intel_pstate/%s" % attr, None).strip() def _set_intel_pstate_attr(self, attr, val): if val is not None: self._cmd.write_to_file("/sys/devices/system/cpu/intel_pstate/%s" % attr, val) def _getset_intel_pstate_attr(self, attr, value): if value is None: return None v = self._get_intel_pstate_attr(attr) self._set_intel_pstate_attr(attr, value) return v def _instance_apply_static(self, instance): super(CPULatencyPlugin, self)._instance_apply_static(instance) if not instance._first_instance: return force_latency_value = instance.options["force_latency"] if force_latency_value is not None: self._set_latency(force_latency_value) if self._has_intel_pstate: self._min_perf_pct_save = self._getset_intel_pstate_attr("min_perf_pct", instance.options["min_perf_pct"]) self._max_perf_pct_save = self._getset_intel_pstate_attr("max_perf_pct", instance.options["max_perf_pct"]) self._no_turbo_save = self._getset_intel_pstate_attr("no_turbo", instance.options["no_turbo"]) def _instance_unapply_static(self, instance, full_rollback = False): super(CPULatencyPlugin, self)._instance_unapply_static(instance, full_rollback) if instance._first_instance and self._has_intel_pstate: self._set_intel_pstate_attr("min_perf_pct", self._min_perf_pct_save) self._set_intel_pstate_attr("max_perf_pct", self._max_perf_pct_save) self._set_intel_pstate_attr("no_turbo", self._no_turbo_save) def _instance_apply_dynamic(self, instance, device): self._instance_update_dynamic(instance, device) def _instance_update_dynamic(self, instance, device): assert(instance._first_instance) if device != instance._first_device: return load = instance._load_monitor.get_load()["system"] if load < instance.options["load_threshold"]: self._set_latency(instance.options["latency_high"]) else: self._set_latency(instance.options["latency_low"]) 
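	# A minimal standalone sketch (not part of the plugin) of the PM QoS
	# interface used by _set_latency() below; it assumes the kernel exposes
	# /dev/cpu_dma_latency and honours the request only while the file
	# descriptor stays open:
	#
	#   import os, struct
	#   fd = os.open("/dev/cpu_dma_latency", os.O_WRONLY)
	#   os.write(fd, struct.pack("i", 50))  # request <= 50 us CPU wakeup latency
	#   ...                                 # the request is dropped once fd is closed
	#   os.close(fd)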
def _instance_unapply_dynamic(self, instance, device): pass def _set_latency(self, latency): latency = int(latency) if self._has_pm_qos and self._latency != latency: log.info("setting new cpu latency %d" % latency) latency_bin = struct.pack("i", latency) os.write(self._cpu_latency_fd, latency_bin) self._latency = latency def _get_available_governors(self, device): return self._cmd.read_file("/sys/devices/system/cpu/%s/cpufreq/scaling_available_governors" % device).strip().split() @command_set("governor", per_device=True) def _set_governor(self, governor, device, sim): if not self._check_cpu_can_change_governor(device): return None if governor not in self._get_available_governors(device): if not sim: log.info("ignoring governor '%s' on cpu '%s', it is not supported" % (governor, device)) return None if not sim: log.info("setting governor '%s' on cpu '%s'" % (governor, device)) self._cmd.write_to_file("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device, str(governor)) return str(governor) @command_get("governor") def _get_governor(self, device, ignore_missing=False): governor = None if not self._check_cpu_can_change_governor(device): return None data = self._cmd.read_file("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device, no_error=ignore_missing).strip() if len(data) > 0: governor = data if governor is None: log.error("could not get current governor on cpu '%s'" % device) return governor def _sampling_down_factor_path(self, governor = "ondemand"): return "/sys/devices/system/cpu/cpufreq/%s/sampling_down_factor" % governor @command_set("sampling_down_factor", per_device = True, priority = 10) def _set_sampling_down_factor(self, sampling_down_factor, device, sim): val = None # hack to clear governors map when the profile starts unloading # TODO: this should be handled better way, by e.g. 
currently non-implemented # Plugin.profile_load_finished() method if device in self._governors_map: self._governors_map.clear() self._governors_map[device] = None governor = self._get_governor(device) if governor is None: log.debug("ignoring sampling_down_factor setting for CPU '%s', cannot match governor" % device) return None if governor not in list(self._governors_map.values()): self._governors_map[device] = governor path = self._sampling_down_factor_path(governor) if not os.path.exists(path): log.debug("ignoring sampling_down_factor setting for CPU '%s', governor '%s' doesn't support it" % (device, governor)) return None val = str(sampling_down_factor) if not sim: log.info("setting sampling_down_factor to '%s' for governor '%s'" % (val, governor)) self._cmd.write_to_file(path, val) return val @command_get("sampling_down_factor") def _get_sampling_down_factor(self, device, ignore_missing=False): governor = self._get_governor(device, ignore_missing=ignore_missing) if governor is None: return None path = self._sampling_down_factor_path(governor) if not os.path.exists(path): return None return self._cmd.read_file(path).strip() def _try_set_energy_perf_bias(self, cpu_id, value): (retcode, out, err_msg) = self._cmd.execute( ["x86_energy_perf_policy", "-c", cpu_id, str(value) ], return_err = True) return (retcode, err_msg) @command_set("energy_perf_bias", per_device=True) def _set_energy_perf_bias(self, energy_perf_bias, device, sim): if not self._is_cpu_online(device): log.debug("%s is not online, skipping" % device) return None if self._has_energy_perf_bias: if not sim: cpu_id = device.lstrip("cpu") vals = energy_perf_bias.split('|') for val in vals: val = val.strip() log.debug("Trying to set energy_perf_bias to '%s' on cpu '%s'" % (val, device)) (retcode, err_msg) = self._try_set_energy_perf_bias( cpu_id, val) if retcode == 0: log.info("energy_perf_bias successfully set to '%s' on cpu '%s'" % (val, device)) break elif retcode < 0: log.error("Failed to set energy_perf_bias: %s" % err_msg) break else: log.debug("Could not set energy_perf_bias to '%s' on cpu '%s', trying another value" % (val, device)) else: log.error("Failed to set energy_perf_bias on cpu '%s'. Is the value in the profile correct?" % device) return str(energy_perf_bias) else: return None def _try_parse_num(self, s): try: v = int(s) except ValueError as e: try: v = int(s, 16) except ValueError as e: v = s return v # Before Linux 4.13 def _energy_perf_policy_to_human(self, s): return {0:"performance", 6:"normal", 15:"powersave"}.get(self._try_parse_num(s), s) # Since Linux 4.13 def _energy_perf_policy_to_human_v2(self, s): return {0:"performance", 4:"balance-performance", 6:"normal", 8:"balance-power", 15:"power", }.get(self._try_parse_num(s), s) @command_get("energy_perf_bias") def _get_energy_perf_bias(self, device, ignore_missing=False): energy_perf_bias = None if not self._is_cpu_online(device): log.debug("%s is not online, skipping" % device) return None if self._has_energy_perf_bias: cpu_id = device.lstrip("cpu") retcode, lines = self._cmd.execute(["x86_energy_perf_policy", "-c", cpu_id, "-r"]) if retcode == 0: for line in lines.splitlines(): l = line.split() if len(l) == 2: energy_perf_bias = self._energy_perf_policy_to_human(l[1]) break elif len(l) == 3: energy_perf_bias = self._energy_perf_policy_to_human_v2(l[2]) break return energy_perf_bias tuned-2.10.0/tuned/plugins/plugin_disk.py000066400000000000000000000273701331721725100204060ustar00rootroot00000000000000import errno from . 
import hotplug from .decorators import * import tuned.logs import tuned.consts as consts from tuned.utils.commands import commands import os import re log = tuned.logs.get() class DiskPlugin(hotplug.Plugin): """ Plugin for tuning options of disks. """ def __init__(self, *args, **kwargs): super(DiskPlugin, self).__init__(*args, **kwargs) self._power_levels = [254, 225, 195, 165, 145, 125, 105, 85, 70, 55, 30, 20] self._spindown_levels = [0, 250, 230, 210, 190, 170, 150, 130, 110, 90, 70, 60] self._levels = len(self._power_levels) self._level_steps = 6 self._load_smallest = 0.01 self._cmd = commands() def _init_devices(self): self._devices_supported = True self._free_devices = set() for device in self._hardware_inventory.get_devices("block"): if self._device_is_supported(device): self._free_devices.add(device.sys_name) self._assigned_devices = set() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("block", x) for x in devices] @classmethod def _device_is_supported(cls, device): return device.device_type == "disk" and \ device.attributes.get("removable", None) == "0" and \ (device.parent is None or \ device.parent.subsystem in ["scsi", "virtio", "xen"]) def _hardware_events_init(self): self._hardware_inventory.subscribe(self, "block", self._hardware_events_callback) def _hardware_events_cleanup(self): self._hardware_inventory.unsubscribe(self) def _hardware_events_callback(self, event, device): if self._device_is_supported(device): super(DiskPlugin, self)._hardware_events_callback(event, device) def _added_device_apply_tuning(self, instance, device_name): if instance._load_monitor is not None: instance._load_monitor.add_device(device_name) super(DiskPlugin, self)._added_device_apply_tuning(instance, device_name) def _removed_device_unapply_tuning(self, instance, device_name): if instance._load_monitor is not None: instance._load_monitor.remove_device(device_name) super(DiskPlugin, self)._removed_device_unapply_tuning(instance, device_name) @classmethod def _get_config_options(cls): return { "dynamic" : True, # FIXME: do we want this default? 
"elevator" : None, "apm" : None, "spindown" : None, "readahead" : None, "readahead_multiply" : None, "scheduler_quantum" : None, } @classmethod def _get_config_options_used_by_dynamic(cls): return [ "apm", "spindown", ] def _instance_init(self, instance): instance._has_static_tuning = True self._apm_errcnt = 0 self._spindown_errcnt = 0 if self._option_bool(instance.options["dynamic"]): instance._has_dynamic_tuning = True instance._load_monitor = self._monitors_repository.create("disk", instance.devices) instance._device_idle = {} instance._stats = {} instance._idle = {} instance._spindown_change_delayed = {} else: instance._has_dynamic_tuning = False instance._load_monitor = None def _instance_cleanup(self, instance): if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) instance._load_monitor = None def _update_errcnt(self, rc, spindown): if spindown: s = "spindown" cnt = self._spindown_errcnt else: s = "apm" cnt = self._apm_errcnt if cnt >= consts.ERROR_THRESHOLD: return if rc == 0: cnt = 0 elif rc == -errno.ENOENT: self._spindown_errcnt = self._apm_errcnt = consts.ERROR_THRESHOLD + 1 log.warn("hdparm command not found, ignoring future set_apm / set_spindown commands") return else: cnt += 1 if cnt == consts.ERROR_THRESHOLD: log.info("disabling set_%s command: too many consecutive errors" % s) if spindown: self._spindown_errcnt = cnt else: self._apm_errcnt = cnt def _change_spindown(self, instance, device, new_spindown_level): log.debug("changing spindown to %d" % new_spindown_level) (rc, out) = self._cmd.execute(["hdparm", "-S%d" % new_spindown_level, "/dev/%s" % device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, True) instance._spindown_change_delayed[device] = False def _drive_spinning(self, device): (rc, out) = self._cmd.execute(["hdparm", "-C", "/dev/%s" % device], no_errors = [errno.ENOENT]) return not "standby" in out and not "sleeping" in out def _instance_update_dynamic(self, instance, device): load = instance._load_monitor.get_device_load(device) if load is None: return if not device in instance._stats: self._init_stats_and_idle(instance, device) self._update_stats(instance, device, load) self._update_idle(instance, device) stats = instance._stats[device] idle = instance._idle[device] # level change decision if idle["level"] + 1 < self._levels and idle["read"] >= self._level_steps and idle["write"] >= self._level_steps: level_change = 1 elif idle["level"] > 0 and (idle["read"] == 0 or idle["write"] == 0): level_change = -1 else: level_change = 0 # change level if decided if level_change != 0: idle["level"] += level_change new_power_level = self._power_levels[idle["level"]] new_spindown_level = self._spindown_levels[idle["level"]] log.debug("tuning level changed to %d" % idle["level"]) if self._spindown_errcnt < consts.ERROR_THRESHOLD: if not self._drive_spinning(device) and level_change > 0: log.debug("delaying spindown change to %d, drive has already spun down" % new_spindown_level) instance._spindown_change_delayed[device] = True else: self._change_spindown(instance, device, new_spindown_level) if self._apm_errcnt < consts.ERROR_THRESHOLD: log.debug("changing APM_level to %d" % new_power_level) (rc, out) = self._cmd.execute(["hdparm", "-B%d" % new_power_level, "/dev/%s" % device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, False) elif instance._spindown_change_delayed[device] and self._drive_spinning(device): new_spindown_level = self._spindown_levels[idle["level"]] self._change_spindown(instance, device, 
new_spindown_level) log.debug("%s load: read %0.2f, write %0.2f" % (device, stats["read"], stats["write"])) log.debug("%s idle: read %d, write %d, level %d" % (device, idle["read"], idle["write"], idle["level"])) def _init_stats_and_idle(self, instance, device): instance._stats[device] = { "new": 11 * [0], "old": 11 * [0], "max": 11 * [1] } instance._idle[device] = { "level": 0, "read": 0, "write": 0 } instance._spindown_change_delayed[device] = False def _update_stats(self, instance, device, new_load): instance._stats[device]["old"] = old_load = instance._stats[device]["new"] instance._stats[device]["new"] = new_load # load difference diff = [new_old[0] - new_old[1] for new_old in zip(new_load, old_load)] instance._stats[device]["diff"] = diff # adapt maximum expected load if the difference is higer old_max_load = instance._stats[device]["max"] max_load = [max(pair) for pair in zip(old_max_load, diff)] instance._stats[device]["max"] = max_load # read/write ratio instance._stats[device]["read"] = float(diff[1]) / float(max_load[1]) instance._stats[device]["write"] = float(diff[5]) / float(max_load[5]) def _update_idle(self, instance, device): # increase counter if there is no load, otherwise reset the counter for operation in ["read", "write"]: if instance._stats[device][operation] < self._load_smallest: instance._idle[device][operation] += 1 else: instance._idle[device][operation] = 0 def _instance_unapply_dynamic(self, instance, device): pass def _sysfs_path(self, device, suffix, prefix = "/sys/block/"): if "/" in device: dev = os.path.join(prefix, device.replace("/", "!"), suffix) if os.path.exists(dev): return dev return os.path.join(prefix, device, suffix) def _elevator_file(self, device): return self._sysfs_path(device, "queue/scheduler") @command_set("elevator", per_device=True) def _set_elevator(self, value, device, sim): sys_file = self._elevator_file(device) if not sim: self._cmd.write_to_file(sys_file, value) return value @command_get("elevator") def _get_elevator(self, device, ignore_missing=False): sys_file = self._elevator_file(device) # example of scheduler file content: # noop deadline [cfq] return self._cmd.get_active_option(self._cmd.read_file(sys_file, no_error=ignore_missing)) @command_set("apm", per_device=True) def _set_apm(self, value, device, sim): if self._apm_errcnt < consts.ERROR_THRESHOLD: if not sim: (rc, out) = self._cmd.execute(["hdparm", "-B", str(value), "/dev/" + device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, False) return str(value) else: return None @command_get("apm") def _get_apm(self, device, ignore_missing=False): value = None err = False (rc, out) = self._cmd.execute(["hdparm", "-B", "/dev/" + device], no_errors = [errno.ENOENT]) if rc == -errno.ENOENT: return None elif rc != 0: err = True else: m = re.match(r".*=\s*(\d+).*", out, re.S) if m: try: value = int(m.group(1)) except ValueError: err = True if err: log.error("could not get current APM settings for device '%s'" % device) return value @command_set("spindown", per_device=True) def _set_spindown(self, value, device, sim): if self._spindown_errcnt < consts.ERROR_THRESHOLD: if not sim: (rc, out) = self._cmd.execute(["hdparm", "-S", str(value), "/dev/" + device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, True) return str(value) else: return None @command_get("spindown") def _get_spindown(self, device, ignore_missing=False): # There's no way how to get current/old spindown value, hardcoding vendor specific 253 return 253 def _readahead_file(self, device): return 
self._sysfs_path(device, "queue/read_ahead_kb") def _parse_ra(self, value): val = str(value).split(None, 1) v = int(val[0]) if len(val) > 1 and val[1][0] == "s": # v *= 512 / 1024 v /= 2 return v @command_set("readahead", per_device=True) def _set_readahead(self, value, device, sim): sys_file = self._readahead_file(device) val = self._parse_ra(value) if not sim: self._cmd.write_to_file(sys_file, "%d" % val) return val @command_get("readahead") def _get_readahead(self, device, ignore_missing=False): sys_file = self._readahead_file(device) value = self._cmd.read_file(sys_file, no_error=ignore_missing).strip() if len(value) == 0: return None return int(value) @command_custom("readahead_multiply", per_device=True) def _multiply_readahead(self, enabling, multiplier, device, verify, ignore_missing): if verify: return None storage_key = self._storage_key( command_name = "readahead_multiply", device_name = device) if enabling: old_readahead = self._get_readahead(device) if old_readahead is None: return new_readahead = int(float(multiplier) * old_readahead) self._storage.set(storage_key, old_readahead) self._set_readahead(new_readahead, device, False) else: old_readahead = self._storage.get(storage_key) if old_readahead is None: return self._set_readahead(old_readahead, device, False) self._storage.unset(storage_key) def _scheduler_quantum_file(self, device): return self._sysfs_path(device, "queue/iosched/quantum") @command_set("scheduler_quantum", per_device=True) def _set_scheduler_quantum(self, value, device, sim): sys_file = self._scheduler_quantum_file(device) if not sim: self._cmd.write_to_file(sys_file, "%d" % int(value)) return value @command_get("scheduler_quantum") def _get_scheduler_quantum(self, device, ignore_missing=False): sys_file = self._scheduler_quantum_file(device) value = self._cmd.read_file(sys_file, no_error=ignore_missing).strip() if len(value) == 0: if not ignore_missing: log.info("disk_scheduler_quantum option is not supported by this HW") return None return int(value) tuned-2.10.0/tuned/plugins/plugin_eeepc_she.py000066400000000000000000000040551331721725100213670ustar00rootroot00000000000000from . import base from . import exceptions import tuned.logs from tuned.utils.commands import commands import os log = tuned.logs.get() class EeePCSHEPlugin(base.Plugin): """ Plugin for tuning FSB (front side bus) speed on Asus EEE PCs with SHE (Super Hybrid Engine) support. 
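    The tuning is purely dynamic and driven by the current system load. An
    illustrative profile snippet (the thresholds shown are the built-in
    defaults):

    [eeepc_she]
    load_threshold_powersave=0.4
    load_threshold_normal=0.6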
""" def __init__(self, *args, **kwargs): self._cmd = commands() self._control_file = "/sys/devices/platform/eeepc/cpufv" if not os.path.isfile(self._control_file): self._control_file = "/sys/devices/platform/eeepc-wmi/cpufv" if not os.path.isfile(self._control_file): raise exceptions.NotSupportedPluginException("Plugin is not supported on your hardware.") super(EeePCSHEPlugin, self).__init__(*args, **kwargs) @classmethod def _get_config_options(self): return { "load_threshold_normal" : 0.6, "load_threshold_powersave" : 0.4, "she_powersave" : 2, "she_normal" : 1, } def _instance_init(self, instance): instance._has_static_tuning = False instance._has_dynamic_tuning = True instance._she_mode = None instance._load_monitor = self._monitors_repository.create("load", None) def _instance_cleanup(self, instance): if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) instance._load_monitor = None def _instance_update_dynamic(self, instance, device): load = instance._load_monitor.get_load()["system"] if load <= instance.options["load_threshold_powersave"]: self._set_she_mode(instance, "powersave") elif load >= instance.options["load_threshold_normal"]: self._set_she_mode(instance, "normal") def _instance_unapply_dynamic(self, instance, device): # FIXME: restore previous value self._set_she_mode(instance, "normal") def _set_she_mode(self, instance, new_mode): new_mode_numeric = int(instance.options["she_%s" % new_mode]) if instance._she_mode != new_mode_numeric: log.info("new eeepc_she mode %s (%d) " % (new_mode, new_mode_numeric)) self._cmd.write_to_file(self._control_file, "%s" % new_mode_numeric) self._she_mode = new_mode_numeric tuned-2.10.0/tuned/plugins/plugin_modules.py000066400000000000000000000074541331721725100211250ustar00rootroot00000000000000import re import os.path from . import base from .decorators import * import tuned.logs from subprocess import * from tuned.utils.commands import commands import tuned.consts as consts log = tuned.logs.get() class ModulesPlugin(base.Plugin): """ Plugin for applying custom kernel modules options. 
""" def __init__(self, *args, **kwargs): super(ModulesPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True instance._modules = instance.options def _instance_cleanup(self, instance): pass def _reload_modules(self, modules): for module in modules: retcode, out = self._cmd.execute(["modprobe", "-r", module]) if retcode < 0: log.warn("'modprobe' command not found, cannot reload kernel modules, reboot is required") return elif retcode > 0: log.debug("cannot remove kernel module '%s': %s" % (module, out.strip())) retcode, out = self._cmd.execute(["modprobe", module]) if retcode != 0: log.warn("cannot insert/reinsert module '%s', reboot is required: %s" % (module, out.strip())) def _instance_apply_static(self, instance): self._clear_modprobe_file() s = "" retcode = 0 skip_check = False reload_list = [] for option, value in list(instance._modules.items()): module = self._variables.expand(option) v = self._variables.expand(value) if not skip_check: retcode, out = self._cmd.execute(["modinfo", module]) if retcode < 0: skip_check = True log.warn("'modinfo' command not found, not checking kernel modules") elif retcode > 0: log.error("kernel module '%s' not found, skipping it" % module) if skip_check or retcode == 0: if len(v) > 1 and v[0:2] == "+r": v = re.sub(r"^\s*\+r\s*,?\s*", "", v) reload_list.append(module) if len(v) > 0: s += "options " + module + " " + v + "\n" else: log.debug("module '%s' doesn't have any option specified, not writing it to modprobe.d" % module) self._cmd.write_to_file(consts.MODULES_FILE, s) l = len(reload_list) if l > 0: self._reload_modules(reload_list) if len(instance._modules) != l: log.info(consts.STR_HINT_REBOOT) def _unquote_path(self, path): return str(path).replace("/", "") def _instance_verify_static(self, instance, ignore_missing): ret = True # not all modules exports all their parameteters through sysfs, so hardcode check with ignore_missing ignore_missing = True r = re.compile(r"\s+") for option, value in list(instance._modules.items()): module = self._variables.expand(option) v = self._variables.expand(value) v = re.sub(r"^\s*\+r\s*,?\s*", "", v) mpath = "/sys/module/%s" % module if not os.path.exists(mpath): ret = False log.error(consts.STR_VERIFY_PROFILE_FAIL % "module '%s' is not loaded" % module) else: log.info(consts.STR_VERIFY_PROFILE_OK % "module '%s' is loaded" % module) l = r.split(v) for item in l: arg = item.split("=") if len(arg) != 2: log.warn("unrecognized module option for module '%s': %s" % (module, item)) else: if self._verify_value(arg[0], arg[1], self._cmd.read_file(mpath + "/parameters/" + self._unquote_path(arg[0]), err_ret = None, no_error = True), ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback: self._clear_modprobe_file() def _clear_modprobe_file(self): s = self._cmd.read_file(consts.MODULES_FILE, no_error = True) l = s.split("\n") i = j = 0 ll = len(l) r = re.compile(r"^\s*#") while i < ll: if r.search(l[i]) is None: j = i i = ll i += 1 s = "\n".join(l[0:j]) if len(s) > 0: s += "\n" self._cmd.write_to_file(consts.MODULES_FILE, s) tuned-2.10.0/tuned/plugins/plugin_mounts.py000066400000000000000000000122521331721725100207720ustar00rootroot00000000000000import tuned.consts as consts from . 
import base from .decorators import * from subprocess import Popen,PIPE import tuned.logs from tuned.utils.commands import commands import glob log = tuned.logs.get() cmd = commands() class MountsPlugin(base.Plugin): """ Plugin for tuning options of mount-points. """ @classmethod def _generate_mountpoint_topology(cls): """ Gets the information about disks, partitions and mountpoints. Stores information about used filesystem and creates a list of all underlying devices (in case of LVM) for each mountpoint. """ mountpoint_topology = {} current_disk = None stdout, stderr = Popen(["lsblk", "-rno", \ "TYPE,RM,KNAME,FSTYPE,MOUNTPOINT"], \ stdout=PIPE, stderr=PIPE, close_fds=True, \ universal_newlines = True).communicate() for columns in [line.split() for line in stdout.splitlines()]: if len(columns) < 3: continue device_type, device_removable, device_name = columns[:3] filesystem = columns[3] if len(columns) > 3 else None mountpoint = columns[4] if len(columns) > 4 else None if device_type == "disk": current_disk = device_name continue # skip removable, skip nonpartitions if device_removable == "1" or device_type not in ["part", "lvm"]: continue if mountpoint is None or mountpoint == "[SWAP]": continue mountpoint_topology.setdefault(mountpoint, {"disks": set(), "device_name": device_name, "filesystem": filesystem}) mountpoint_topology[mountpoint]["disks"].add(current_disk) cls._mountpoint_topology = mountpoint_topology def _init_devices(self): self._generate_mountpoint_topology() self._devices_supported = True self._free_devices = set(self._mountpoint_topology.keys()) self._assigned_devices = set() @classmethod def _get_config_options(self): return { "disable_barriers": None, } def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True def _instance_cleanup(self, instance): pass def _get_device_cache_type(self, device): """ Get device cache type. This will work only for devices on SCSI kernel subsystem. """ source_filenames = glob.glob("/sys/block/%s/device/scsi_disk/*/cache_type" % device) for source_filename in source_filenames: return cmd.read_file(source_filename).strip() return None def _mountpoint_has_writeback_cache(self, mountpoint): """ Checks if the device has 'write back' cache. If the cache type cannot be determined, asume some other cache. """ for device in self._mountpoint_topology[mountpoint]["disks"]: if self._get_device_cache_type(device) == "write back": return True return False def _mountpoint_has_barriers(self, mountpoint): """ Checks if a given mountpoint is mounted with barriers enabled or disabled. """ with open("/proc/mounts") as mounts_file: for line in mounts_file: # device mountpoint filesystem options dump check columns = line.split() if columns[0][0] != "/": continue if columns[1] == mountpoint: option_list = columns[3] break else: return None options = option_list.split(",") for option in options: (name, sep, value) = option.partition("=") # nobarrier barrier=0 if name == "nobarrier" or (name == "barrier" and value == "0"): return False # barrier barrier=1 elif name == "barrier": return True else: # default return True def _remount_partition(self, partition, options): """ Remounts partition. 
""" remount_command = ["/usr/bin/mount", partition, "-o", "remount,%s" % options] cmd.execute(remount_command) @command_custom("disable_barriers", per_device=True) def _disable_barriers(self, start, value, mountpoint, verify, ignore_missing): storage_key = self._storage_key( command_name = "disable_barriers", device_name = mountpoint) force = str(value).lower() == "force" value = force or self._option_bool(value) if start: if not value: return None reject_reason = None if not self._mountpoint_topology[mountpoint]["filesystem"].startswith("ext"): reject_reason = "filesystem not supported" elif not force and self._mountpoint_has_writeback_cache(mountpoint): reject_reason = "device uses write back cache" else: original_value = self._mountpoint_has_barriers(mountpoint) if original_value is None: reject_reason = "unknown current setting" elif original_value == False: if verify: log.info(consts.STR_VERIFY_PROFILE_OK % mountpoint) return True else: reject_reason = "barriers already disabled" elif verify: log.error(consts.STR_VERIFY_PROFILE_FAIL % mountpoint) return False if reject_reason is not None: log.info("not disabling barriers on '%s' (%s)" % (mountpoint, reject_reason)) return None self._storage.set(storage_key, original_value) log.info("disabling barriers on '%s'" % mountpoint) self._remount_partition(mountpoint, "barrier=0") else: if verify: return None original_value = self._storage.get(storage_key) if original_value is None: return None log.info("enabling barriers on '%s'" % mountpoint) self._remount_partition(mountpoint, "barrier=1") self._storage.unset(storage_key) return None tuned-2.10.0/tuned/plugins/plugin_net.py000066400000000000000000000303731331721725100202370ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from tuned.utils.nettool import ethcard from tuned.utils.commands import commands import os import re log = tuned.logs.get() WOL_VALUES = "pumbagsd" class NetTuningPlugin(base.Plugin): """ Plugin for ethernet card options tuning. 
""" def __init__(self, *args, **kwargs): super(NetTuningPlugin, self).__init__(*args, **kwargs) self._load_smallest = 0.05 self._level_steps = 6 self._cmd = commands() def _init_devices(self): self._devices_supported = True self._free_devices = set() self._assigned_devices = set() re_not_virtual = re.compile('(?!.*/virtual/.*)') for device in self._hardware_inventory.get_devices("net"): if re_not_virtual.match(device.device_path): self._free_devices.add(device.sys_name) log.debug("devices: %s" % str(self._free_devices)); def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("net", x) for x in devices] def _instance_init(self, instance): instance._has_static_tuning = True if self._option_bool(instance.options["dynamic"]): instance._has_dynamic_tuning = True instance._load_monitor = self._monitors_repository.create("net", instance.devices) instance._idle = {} instance._stats = {} else: instance._has_dynamic_tuning = False instance._load_monitor = None instance._idle = None instance._stats = None def _instance_cleanup(self, instance): if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) instance._load_monitor = None def _instance_apply_dynamic(self, instance, device): self._instance_update_dynamic(instance, device) def _instance_update_dynamic(self, instance, device): load = [int(value) for value in instance._load_monitor.get_device_load(device)] if load is None: return if not device in instance._stats: self._init_stats_and_idle(instance, device) self._update_stats(instance, device, load) self._update_idle(instance, device) stats = instance._stats[device] idle = instance._idle[device] if idle["level"] == 0 and idle["read"] >= self._level_steps and idle["write"] >= self._level_steps: idle["level"] = 1 log.info("%s: setting 100Mbps" % device) ethcard(device).set_speed(100) elif idle["level"] == 1 and (idle["read"] == 0 or idle["write"] == 0): idle["level"] = 0 log.info("%s: setting max speed" % device) ethcard(device).set_max_speed() log.debug("%s load: read %0.2f, write %0.2f" % (device, stats["read"], stats["write"])) log.debug("%s idle: read %d, write %d, level %d" % (device, idle["read"], idle["write"], idle["level"])) @classmethod def _get_config_options_coalesce(cls): return { "adaptive-rx": None, "adaptive-tx": None, "rx-usecs": None, "rx-frames": None, "rx-usecs-irq": None, "rx-frames-irq": None, "tx-usecs": None, "tx-frames": None, "tx-usecs-irq": None, "tx-frames-irq": None, "stats-block-usecs": None, "pkt-rate-low": None, "rx-usecs-low": None, "rx-frames-low": None, "tx-usecs-low": None, "tx-frames-low": None, "pkt-rate-high": None, "rx-usecs-high": None, "rx-frames-high": None, "tx-usecs-high": None, "tx-frames-high": None, "sample-interval": None } @classmethod def _get_config_options_pause(cls): return { "autoneg": None, "rx": None, "tx": None } @classmethod def _get_config_options_ring(cls): return { "rx": None, "rx-mini": None, "rx-jumbo": None, "tx": None } @classmethod def _get_config_options(cls): return { "dynamic": True, "wake_on_lan": None, "nf_conntrack_hashsize": None, "features": None, "coalesce": None, "pause": None, "ring": None, } def _init_stats_and_idle(self, instance, device): max_speed = self._calc_speed(ethcard(device).get_max_speed()) instance._stats[device] = { "new": 4 * [0], "max": 2 * [max_speed, 1] } instance._idle[device] = { "level": 0, "read": 0, "write": 0 } def _update_stats(self, instance, device, new_load): # put new to old instance._stats[device]["old"] = old_load = 
instance._stats[device]["new"] instance._stats[device]["new"] = new_load # load difference diff = [new_old[0] - new_old[1] for new_old in zip(new_load, old_load)] instance._stats[device]["diff"] = diff # adapt maximum expected load if the difference is higer old_max_load = instance._stats[device]["max"] max_load = [max(pair) for pair in zip(old_max_load, diff)] instance._stats[device]["max"] = max_load # read/write ratio instance._stats[device]["read"] = float(diff[0]) / float(max_load[0]) instance._stats[device]["write"] = float(diff[2]) / float(max_load[2]) def _update_idle(self, instance, device): # increase counter if there is no load, otherwise reset the counter for operation in ["read", "write"]: if instance._stats[device][operation] < self._load_smallest: instance._idle[device][operation] += 1 else: instance._idle[device][operation] = 0 def _instance_unapply_dynamic(self, instance, device): if device in instance._idle and instance._idle[device]["level"] > 0: instance._idle[device]["level"] = 0 log.info("%s: setting max speed" % device) ethcard(device).set_max_speed() def _calc_speed(self, speed): # 0.6 is just a magical constant (empirical value): Typical workload on netcard won't exceed # that and if it does, then the code is smart enough to adapt it. # 1024 * 1024 as for MB -> B # speed / 7 Mb -> MB return (int) (0.6 * 1024 * 1024 * speed / 8) # parse features/coalesce config parameters (those defined in profile configuration) # context is for error message def _parse_config_parameters(self, value, context): # split supporting various dellimeters v = str(re.sub(r"(:\s*)|(\s+)|(\s*;\s*)|(\s*,\s*)", " ", value)).split() lv = len(v) if lv % 2 != 0: log.error("invalid %s parameter: '%s'" % (context, str(value))) return None if lv == 0: return dict() # convert flat list to dict return dict(list(zip(v[::2], v[1::2]))) # parse features/coalesce device parameters (those returned by ethtool) def _parse_device_parameters(self, value): # substitute "Adaptive RX: val1 TX: val2" to 'adaptive-rx: val1' and # 'adaptive-tx: val2' and workaround for ethtool inconsistencies # (rhbz#1225375) value = self._cmd.multiple_re_replace(\ {"Adaptive RX:": "adaptive-rx:", \ "\s+TX:": "\nadaptive-tx:", \ "rx-frame-low:": "rx-frames-low:", \ "rx-frame-high:": "rx-frames-high:", \ "tx-frame-low:": "tx-frames-low:", \ "tx-frame-high:": "tx-frames-high:", "large-receive-offload:": "lro:"}, value) # remove empty lines, remove fixed parameters (those with "[fixed]") vl = [v for v in value.split('\n') if len(str(v)) > 0 and not re.search("\[fixed\]$", str(v))] if len(vl) < 2: return None # skip first line (device name), split to key/value, # remove pairs which are not key/value return dict([u for u in [re.split(r":\s*", str(v)) for v in vl[1:]] if len(u) == 2]) @classmethod def _nf_conntrack_hashsize_path(self): return "/sys/module/nf_conntrack/parameters/hashsize" @command_set("wake_on_lan", per_device=True) def _set_wake_on_lan(self, value, device, sim): if value is None: return None # see man ethtool for possible wol values, 0 added as an alias for 'd' value = re.sub(r"0", "d", str(value)); if not re.match(r"^[" + WOL_VALUES + r"]+$", value): log.warn("Incorrect 'wake_on_lan' value.") return None if not sim: self._cmd.execute(["ethtool", "-s", device, "wol", value]) return value @command_get("wake_on_lan") def _get_wake_on_lan(self, device, ignore_missing=False): value = None try: m = re.match(r".*Wake-on:\s*([" + WOL_VALUES + "]+).*", self._cmd.execute(["ethtool", device])[1], re.S) if m: value = m.group(1) 
except IOError: pass return value @command_set("nf_conntrack_hashsize") def _set_nf_conntrack_hashsize(self, value, sim): if value is None: return None hashsize = int(value) if hashsize >= 0: if not sim: self._cmd.write_to_file(self._nf_conntrack_hashsize_path(), hashsize) return hashsize else: return None @command_get("nf_conntrack_hashsize") def _get_nf_conntrack_hashsize(self): value = self._cmd.read_file(self._nf_conntrack_hashsize_path()) if len(value) > 0: return int(value) return None # d is dict: {parameter: value} def _check_parameters(self, context, d): if context == "features": return True params = set(d.keys()) supported_getter = { "coalesce": self._get_config_options_coalesce, \ "pause": self._get_config_options_pause, \ "ring": self._get_config_options_ring } supported = set(supported_getter[context]().keys()) if not params.issubset(supported): log.error("unknown %s parameter(s): %s" % (context, str(params - supported))) return False return True # parse output of ethtool -a def _parse_pause_parameters(self, s): s = self._cmd.multiple_re_replace(\ {"Autonegotiate": "autoneg", "RX": "rx", "TX": "tx"}, s) l = s.split("\n")[1:] l = [x for x in l if x != '' and not re.search(r"\[fixed\]", x)] return dict([x for x in [re.split(r":\s*", x) for x in l] if len(x) == 2]) # parse output of ethtool -g def _parse_ring_parameters(self, s): a = re.split(r"^Current hardware settings:$", s, flags=re.MULTILINE) s = a[1] s = self._cmd.multiple_re_replace(\ {"RX": "rx", "RX Mini": "rx-mini", "RX Jumbo": "rx-jumbo", "TX": "tx"}, s) l = s.split("\n") l = [x for x in l if x != ''] l = [x for x in [re.split(r":\s*", x) for x in l] if len(x) == 2] return dict(l) def _get_device_parameters(self, context, device): context2opt = { "coalesce": "-c", "features": "-k", "pause": "-a", "ring": "-g" } opt = context2opt[context] ret, value = self._cmd.execute(["ethtool", opt, device]) if ret != 0 or len(value) == 0: return None context2parser = { "coalesce": self._parse_device_parameters, \ "features": self._parse_device_parameters, \ "pause": self._parse_pause_parameters, \ "ring": self._parse_ring_parameters } parser = context2parser[context] d = parser(value) if context == "coalesce" and not self._check_parameters(context, d): return None return d def _set_device_parameters(self, context, value, device, sim): if value is None or len(value) == 0: return None d = self._parse_config_parameters(value, context) if d is None or not self._check_parameters(context, d): return None if not sim: log.debug("setting %s: %s" % (context, str(d))) context2opt = { "coalesce": "-C", "features": "-K", "pause": "-A", "ring": "-G" } opt = context2opt[context] # ignore ethtool return code 80, it means parameter is already set self._cmd.execute(["ethtool", opt, device] + self._cmd.dict2list(d), no_errors = [80]) return d def _custom_parameters(self, context, start, value, device, verify): storage_key = self._storage_key( command_name = context, device_name = device) if start: cd = self._get_device_parameters(context, device) d = self._set_device_parameters(context, value, device, verify) # backup only parameters which are changed sd = dict([k_v for k_v in list(cd.items()) if k_v[0] in d]) if len(d) != len(sd): log.error("unable to save previous %s, wanted to save: '%s', but read: '%s'" % \ (context, str(list(d.keys())), str(list(cd.items())))) return False if verify: return self._cmd.dict2list(d) == self._cmd.dict2list(sd) self._storage.set(storage_key," ".join(self._cmd.dict2list(sd))) else: if not verify: original_value = 
self._storage.get(storage_key) self._set_device_parameters(context, original_value, device, False) return None @command_custom("features", per_device = True) def _features(self, start, value, device, verify, ignore_missing): return self._custom_parameters("features", start, value, device, verify) @command_custom("coalesce", per_device = True) def _coalesce(self, start, value, device, verify, ignore_missing): return self._custom_parameters("coalesce", start, value, device, verify) @command_custom("pause", per_device = True) def _pause(self, start, value, device, verify, ignore_missing): return self._custom_parameters("pause", start, value, device, verify) @command_custom("ring", per_device = True) def _ring(self, start, value, device, verify, ignore_missing): return self._custom_parameters("ring", start, value, device, verify) tuned-2.10.0/tuned/plugins/plugin_scheduler.py000066400000000000000000000560411331721725100214270ustar00rootroot00000000000000# code for cores isolation was inspired by Tuna implementation # perf code was borrowed from kernel/tools/perf/python/twatch.py # thanks to Arnaldo Carvalho de Melo from . import base from .decorators import * import tuned.logs import re from subprocess import * import threading import perf import select import tuned.consts as consts import procfs import schedutils from tuned.utils.commands import commands import errno log = tuned.logs.get() class SchedulerParams(object): def __init__(self, cmd, cmdline = None, scheduler = None, priority = None, affinity = None): self._cmd = cmd self.cmdline = cmdline self.scheduler = scheduler self.priority = priority self.affinity = affinity @property def affinity(self): if self._affinity is None: return None else: return self._cmd.bitmask2cpulist(self._affinity) @affinity.setter def affinity(self, value): if value is None: self._affinity = None else: self._affinity = self._cmd.cpulist2bitmask(value) class IRQAffinities(object): def __init__(self): self.irqs = {} self.default = None class SchedulerPlugin(base.Plugin): """ Plugin for tuning of scheduler. Currently it can control scheduling priorities of system threads (it is substitution for the rtctl tool). 
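    An illustrative profile snippet (CPU numbers and the regex are
    hypothetical); isolated_cores restricts the affinity of IRQs and eligible
    processes away from the listed CPUs, while ps_blacklist excludes matching
    processes from any tuning:

    [scheduler]
    isolated_cores=2,3
    ps_blacklist=.*ksoftirqd.*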
""" _dict_schedcfg2num = { "f": schedutils.SCHED_FIFO, "b": schedutils.SCHED_BATCH, "r": schedutils.SCHED_RR, "o": schedutils.SCHED_OTHER, "i": schedutils.SCHED_IDLE, } def __init__(self, monitor_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, plugin_instance_factory, global_cfg, variables): super(SchedulerPlugin, self).__init__(monitor_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, plugin_instance_factory, global_cfg, variables) self._has_dynamic_options = True self._daemon = consts.CFG_DEF_DAEMON self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL) if global_cfg is not None: self._daemon = global_cfg.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) self._sleep_interval = int(global_cfg.get(consts.CFG_SLEEP_INTERVAL, consts.CFG_DEF_SLEEP_INTERVAL)) self._cmd = commands() # default is to whitelist all and blacklist none self._ps_whitelist = ".*" self._ps_blacklist = "" self._cpus = perf.cpu_map() self._scheduler_storage_key = self._storage_key( command_name = "scheduler") self._irq_storage_key = self._storage_key( command_name = "irq") def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True # this is hack, runtime_tuning should be covered by dynamic_tuning configuration # TODO: add per plugin dynamic tuning configuration and use dynamic_tuning configuration # instead of runtime_tuning instance._runtime_tuning = True # FIXME: do we want to do this here? # recover original values in case of crash self._scheduler_original = self._storage.get( self._scheduler_storage_key, {}) if len(self._scheduler_original) > 0: log.info("recovering scheduling settings from previous run") self._restore_ps_affinity() self._scheduler_original = {} self._storage.unset(self._scheduler_storage_key) instance._scheduler = instance.options for k in instance._scheduler: instance._scheduler[k] = self._variables.expand(instance._scheduler[k]) if self._cmd.get_bool(instance._scheduler.get("runtime", 1)) == "0": instance._runtime_tuning = False instance._terminate = threading.Event() if self._daemon and instance._runtime_tuning: try: instance._threads = perf.thread_map() evsel = perf.evsel(type = perf.TYPE_SOFTWARE, config = perf.COUNT_SW_DUMMY, task = 1, comm = 1, mmap = 0, freq = 0, wakeup_events = 1, watermark = 1, sample_type = perf.SAMPLE_TID | perf.SAMPLE_CPU) evsel.open(cpus = self._cpus, threads = instance._threads) instance._evlist = perf.evlist(self._cpus, instance._threads) instance._evlist.add(evsel) instance._evlist.mmap() # no perf except: instance._runtime_tuning = False def _instance_cleanup(self, instance): pass @classmethod def _get_config_options(cls): return { "isolated_cores": None, "ps_whitelist": None, "ps_blacklist": None, } # Raises OSError, IOError def _get_cmdline(self, process): if not isinstance(process, procfs.process): pid = process process = procfs.process(pid) cmdline = procfs.process_cmdline(process) if self._is_kthread(process): cmdline = "[" + cmdline + "]" return cmdline # Raises OSError, IOError def get_processes(self): ps = procfs.pidstats() ps.reload_threads() processes = {} for proc in ps.values(): try: cmd = self._get_cmdline(proc) pid = proc["pid"] processes[pid] = cmd if "threads" in proc: for pid in proc["threads"].keys(): cmd = self._get_cmdline(proc) processes[pid] = cmd except (OSError, IOError) as e: if e.errno == errno.ENOENT \ or e.errno == errno.ESRCH: continue else: raise return processes # Raises OSError # Raises SystemError with old 
(pre-0.4) python-schedutils # instead of OSError # If PID doesn't exist, errno == ESRCH def _get_rt(self, pid): scheduler = schedutils.get_scheduler(pid) sched_str = schedutils.schedstr(scheduler) priority = schedutils.get_priority(pid) log.debug("Read scheduler policy '%s' and priority '%d' of PID '%d'" % (sched_str, priority, pid)) return (scheduler, priority) def _set_rt(self, pid, sched, prio): sched_str = schedutils.schedstr(sched) log.debug("Setting scheduler policy to '%s' and priority to '%d' of PID '%d'." % (sched_str, prio, pid)) try: prio_min = schedutils.get_priority_min(sched) prio_max = schedutils.get_priority_max(sched) if prio < prio_min or prio > prio_max: log.error("Priority for %s must be in range %d - %d. '%d' was given." % (sched_str, prio_min, prio_max, prio)) # Workaround for old (pre-0.4) python-schedutils which raised # SystemError instead of OSError except (SystemError, OSError) as e: log.error("Failed to get allowed priority range: %s" % e) try: schedutils.set_scheduler(pid, sched, prio) except (SystemError, OSError) as e: if hasattr(e, "errno") and e.errno == errno.ESRCH: log.debug("Failed to set scheduling parameters of PID %d, the task vanished." % pid) else: log.error("Failed to set scheduling parameters of PID %d: %s" % (pid, e)) # process is a procfs.process object # Raises OSError, IOError def _is_kthread(self, process): return process["stat"]["flags"] & procfs.pidstat.PF_KTHREAD != 0 # Return codes: # 0 - Affinity is fixed # 1 - Affinity is changeable # -1 - Task vanished # -2 - Error def _affinity_changeable(self, pid): try: process = procfs.process(pid) if process["stat"].is_bound_to_cpu(): if process["stat"]["state"] == "Z": log.debug("Affinity of zombie task with PID %d cannot be changed, the task's affinity mask is fixed." % pid) elif self._is_kthread(process): log.debug("Affinity of kernel thread with PID %d cannot be changed, the task's affinity mask is fixed." % pid) else: log.warn("Affinity of task with PID %d cannot be changed, the task's affinity mask is fixed." % pid) return 0 else: return 1 except (OSError, IOError) as e: if e.errno == errno.ENOENT or e.errno == errno.ESRCH: log.debug("Failed to get task info for PID %d, the task vanished." % pid) return -1 else: log.error("Failed to get task info for PID %d: %s" % (pid, e)) return -2 except (AttributeError, KeyError) as e: log.error("Failed to get task info for PID %d: %s" % (pid, e)) return -2 def _store_orig_process_rt(self, pid, scheduler, priority): try: params = self._scheduler_original[pid] except KeyError: params = SchedulerParams(self._cmd) self._scheduler_original[pid] = params if params.scheduler is None and params.priority is None: params.scheduler = scheduler params.priority = priority def _tune_process_rt(self, pid, sched, prio): cont = True if sched is None and prio is None: return cont try: (prev_sched, prev_prio) = self._get_rt(pid) if sched is None: sched = prev_sched self._set_rt(pid, sched, prio) self._store_orig_process_rt(pid, prev_sched, prev_prio) except (SystemError, OSError) as e: if hasattr(e, "errno") and e.errno == errno.ESRCH: log.debug("Failed to read scheduler policy of PID %d, the task vanished." 
% pid) if pid in self._scheduler_original: del self._scheduler_original[pid] cont = False else: log.error("Refusing to set scheduler and priority of PID %d, reading original scheduling parameters failed: %s" % (pid, e)) return cont def _store_orig_process_affinity(self, pid, affinity): try: params = self._scheduler_original[pid] except KeyError: params = SchedulerParams(self._cmd) self._scheduler_original[pid] = params if params.affinity is None: params.affinity = affinity def _tune_process_affinity(self, pid, affinity, intersect = False): cont = True if affinity is None: return cont try: prev_affinity = self._get_affinity(pid) if intersect: affinity = self._get_intersect_affinity( prev_affinity, affinity, affinity) self._set_affinity(pid, affinity) self._store_orig_process_affinity(pid, prev_affinity) except (SystemError, OSError) as e: if hasattr(e, "errno") and e.errno == errno.ESRCH: log.debug("Failed to read affinity of PID %d, the task vanished." % pid) if pid in self._scheduler_original: del self._scheduler_original[pid] cont = False else: log.error("Refusing to set CPU affinity of PID %d, reading original affinity failed: %s" % (pid, e)) return cont #tune process and store previous values def _tune_process(self, pid, cmd, sched, prio, affinity): cont = self._tune_process_rt(pid, sched, prio) if not cont: return cont = self._tune_process_affinity(pid, affinity) if not cont or pid not in self._scheduler_original: return self._scheduler_original[pid].cmdline = cmd def _convert_sched_params(self, str_scheduler, str_priority): scheduler = self._dict_schedcfg2num.get(str_scheduler) if scheduler is None and str_scheduler != "*": log.error("Invalid scheduler: %s. Scheduler and priority will be ignored." % str_scheduler) return (None, None) else: try: priority = int(str_priority) except ValueError: log.error("Invalid priority: %s. Scheduler and priority will be ignored." % str_priority) return (None, None) return (scheduler, priority) def _convert_affinity(self, str_affinity): if str_affinity == "*": affinity = None else: affinity = self._cmd.hex2cpulist(str_affinity) if not affinity: log.error("Invalid affinity: %s. It will be ignored." 
% str_affinity) affinity = None return affinity def _convert_sched_cfg(self, vals): (rule_prio, scheduler, priority, affinity, regex) = vals (scheduler, priority) = self._convert_sched_params( scheduler, priority) affinity = self._convert_affinity(affinity) return (rule_prio, scheduler, priority, affinity, regex) def _instance_apply_static(self, instance): super(SchedulerPlugin, self)._instance_apply_static(instance) try: ps = self.get_processes() except (OSError, IOError) as e: log.error("error applying tuning, cannot get information about running processes: %s" % e) return sched_cfg = [(option, str(value).split(":", 4)) for option, value in instance._scheduler.items()] buf = [(option, self._convert_sched_cfg(vals)) for option, vals in sched_cfg if re.match(r"group\.", option) and len(vals) == 5] sched_cfg = sorted(buf, key=lambda option_vals: option_vals[1][0]) sched_all = dict() # for runtime tunning instance._sched_lookup = {} for option, (rule_prio, scheduler, priority, affinity, regex) \ in sched_cfg: try: r = re.compile(regex) except re.error as e: log.error("error compiling regular expression: '%s'" % str(regex)) continue processes = [(pid, cmd) for pid, cmd in ps.items() if re.search(r, cmd) is not None] #cmd - process name, option - group name sched = dict([(pid, (cmd, option, scheduler, priority, affinity, regex)) for pid, cmd in processes]) sched_all.update(sched) regex = str(regex).replace("(", r"\(") regex = regex.replace(")", r"\)") instance._sched_lookup[regex] = [scheduler, priority, affinity] for pid, (cmd, option, scheduler, priority, affinity, regex) \ in sched_all.items(): self._tune_process(pid, cmd, scheduler, priority, affinity) self._storage.set(self._scheduler_storage_key, self._scheduler_original) if self._daemon and instance._runtime_tuning: instance._thread = threading.Thread(target = self._thread_code, args = [instance]) instance._thread.start() def _restore_ps_affinity(self): try: ps = self.get_processes() except (OSError, IOError) as e: log.error("error unapplying tuning, cannot get information about running processes: %s" % e) return for pid, orig_params in self._scheduler_original.items(): # if command line for the pid didn't change, it's very probably the same process if pid not in ps or ps[pid] != orig_params.cmdline: continue if orig_params.scheduler is not None \ and orig_params.priority is not None: self._set_rt(pid, orig_params.scheduler, orig_params.priority) if orig_params.affinity is not None: self._set_affinity(pid, orig_params.affinity) self._scheduler_original = {} self._storage.unset(self._scheduler_storage_key) def _instance_unapply_static(self, instance, full_rollback = False): super(SchedulerPlugin, self)._instance_unapply_static(instance, full_rollback) if self._daemon and instance._runtime_tuning: instance._terminate.set() instance._thread.join() self._restore_ps_affinity() def _add_pid(self, instance, pid, r): try: cmd = self._get_cmdline(pid) except (OSError, IOError) as e: if e.errno == errno.ENOENT \ or e.errno == errno.ESRCH: log.debug("Failed to get cmdline of PID %d, the task vanished." 
% pid) else: log.error("Failed to get cmdline of PID %d: %s" % (pid, e)) return v = self._cmd.re_lookup(instance._sched_lookup, cmd, r) if v is not None and not pid in self._scheduler_original: log.debug("tuning new process '%s' with PID '%d' by '%s'" % (cmd, pid, str(v))) (sched, prio, affinity) = v self._tune_process(pid, cmd, sched, prio, affinity) self._storage.set(self._scheduler_storage_key, self._scheduler_original) def _remove_pid(self, instance, pid): if pid in self._scheduler_original: del self._scheduler_original[pid] log.debug("removed PID %d from the rollback database" % pid) self._storage.set(self._scheduler_storage_key, self._scheduler_original) def _thread_code(self, instance): r = self._cmd.re_lookup_compile(instance._sched_lookup) poll = select.poll() for fd in instance._evlist.get_pollfd(): poll.register(fd) while not instance._terminate.is_set(): # timeout to poll in milliseconds if len(poll.poll(self._sleep_interval * 1000)) > 0 and not instance._terminate.is_set(): read_events = True while read_events: read_events = False for cpu in self._cpus: event = instance._evlist.read_on_cpu(cpu) if event: read_events = True if event.type == perf.RECORD_COMM: self._add_pid(instance, int(event.tid), r) elif event.type == perf.RECORD_EXIT: self._remove_pid(instance, int(event.tid)) @command_custom("ps_whitelist", per_device = False) def _ps_whitelist(self, enabling, value, verify, ignore_missing): # currently unsupported if verify: return None if enabling and value is not None: self._ps_whitelist = "|".join(["(%s)" % v for v in re.split(r"(?= 0: if not sim: self._cmd.write_to_file(self._cache_threshold_path, threshold) return threshold else: return None @command_get("avc_cache_threshold") def _get_avc_cache_threshold(self): value = self._cmd.read_file(self._cache_threshold_path) if len(value) > 0: return int(value) return None tuned-2.10.0/tuned/plugins/plugin_sysctl.py000066400000000000000000000056471331721725100210000ustar00rootroot00000000000000import re from . import base from .decorators import * import tuned.logs from subprocess import * from tuned.utils.commands import commands import tuned.consts as consts log = tuned.logs.get() class SysctlPlugin(base.Plugin): """ Plugin for applying custom sysctl options. """ def __init__(self, *args, **kwargs): super(SysctlPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True # FIXME: do we want to do this here? 
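# Note: the persistent storage may still hold the pre-tuning sysctl values if a
# previous Tuned run crashed; those values are written back and the storage
# entry is cleared below before the new settings are applied.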
# recover original values in case of crash storage_key = self._storage_key(instance.name) instance._sysctl_original = self._storage.get(storage_key, {}) if len(instance._sysctl_original) > 0: log.info("recovering old sysctl settings from previous run") self._instance_unapply_static(instance) instance._sysctl_original = {} self._storage.unset(storage_key) instance._sysctl = instance.options def _instance_cleanup(self, instance): storage_key = self._storage_key(instance.name) self._storage.unset(storage_key) def _instance_apply_static(self, instance): for option, value in list(instance._sysctl.items()): original_value = self._read_sysctl(option) if original_value != None: instance._sysctl_original[option] = original_value self._write_sysctl(option, self._process_assignment_modifiers(self._variables.expand(self._cmd.unquote(value)), original_value)) storage_key = self._storage_key(instance.name) self._storage.set(storage_key, instance._sysctl_original) if self._global_cfg.get_bool(consts.CFG_REAPPLY_SYSCTL, consts.CFG_DEF_REAPPLY_SYSCTL): log.info("reapplying system sysctl") self._cmd.execute(["sysctl", "--system"]) def _instance_verify_static(self, instance, ignore_missing): ret = True # override, so always skip missing ignore_missing = True for option, value in list(instance._sysctl.items()): curr_val = self._read_sysctl(option) value = self._process_assignment_modifiers(self._variables.expand(value), curr_val) if value is not None: if self._verify_value(option, self._cmd.remove_ws(value), curr_val, ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): for option, value in list(instance._sysctl_original.items()): self._write_sysctl(option, value) def _execute_sysctl(self, arguments): execute = ["sysctl"] + arguments log.debug("executing %s" % execute) return self._cmd.execute(execute) def _read_sysctl(self, option): retcode, stdout = self._execute_sysctl(["-e", option]) if retcode == 0: parts = [self._cmd.remove_ws(value) for value in stdout.split("=", 1)] if len(parts) == 2: option, value = parts return value return None def _write_sysctl(self, option, value): retcode, stdout = self._execute_sysctl(["-q", "-w", "%s=%s" % (option, value)]) return retcode == 0 tuned-2.10.0/tuned/plugins/plugin_sysfs.py000066400000000000000000000040531331721725100206140ustar00rootroot00000000000000from . import base import glob import re import os.path from .decorators import * import tuned.logs from subprocess import * from tuned.utils.commands import commands log = tuned.logs.get() class SysfsPlugin(base.Plugin): """ Plugin for applying custom sysfs options, using specific plugins is preferred. 
""" # TODO: resolve possible conflicts with sysctl settings from other plugins def __init__(self, *args, **kwargs): super(SysfsPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True instance._sysfs = dict([(os.path.normpath(key_value[0]), key_value[1]) for key_value in list(instance.options.items())]) instance._sysfs_original = {} def _instance_cleanup(self, instance): pass def _instance_apply_static(self, instance): for key, value in list(instance._sysfs.items()): v = self._variables.expand(value) for f in glob.iglob(key): if self._check_sysfs(f): instance._sysfs_original[f] = self._read_sysfs(f) self._write_sysfs(f, v) else: log.error("rejecting write to '%s' (not inside /sys)" % f) def _instance_verify_static(self, instance, ignore_missing): ret = True for key, value in list(instance._sysfs.items()): v = self._variables.expand(value) for f in glob.iglob(key): if self._check_sysfs(f): curr_val = self._read_sysfs(f) if self._verify_value(f, v, curr_val, ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): for key, value in list(instance._sysfs_original.items()): self._write_sysfs(key, value) def _check_sysfs(self, sysfs_file): return re.match(r"^/sys/.*", sysfs_file) def _read_sysfs(self, sysfs_file): data = self._cmd.read_file(sysfs_file).strip() if len(data) > 0: return self._cmd.get_active_option(data, False) else: return None def _write_sysfs(self, sysfs_file, value): return self._cmd.write_to_file(sysfs_file, value) tuned-2.10.0/tuned/plugins/plugin_systemd.py000066400000000000000000000115111331721725100211320ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from . import exceptions from tuned.utils.commands import commands import tuned.consts as consts import os import re log = tuned.logs.get() class SystemdPlugin(base.Plugin): """ Plugin for tuning systemd options. These tunings are unloaded only on profile change followed by reboot. """ def __init__(self, *args, **kwargs): if not os.path.isfile(consts.SYSTEMD_SYSTEM_CONF_FILE): raise exceptions.NotSupportedPluginException("Required systemd '%s' configuration file not found, disabling plugin." 
% consts.SYSTEMD_SYSTEM_CONF_FILE) super(SystemdPlugin, self).__init__(*args, **kwargs) self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True def _instance_cleanup(self, instance): pass @classmethod def _get_config_options(cls): return { "cpu_affinity": None, } def _get_keyval(self, conf, key): if conf is not None: mo = re.search(r"^\s*" + key + r"\s*=\s*(.*)$", conf, flags = re.MULTILINE) if mo is not None and mo.lastindex == 1: return mo.group(1) return None # add/replace key with the value def _add_keyval(self, conf, key, val): (conf_new, nsubs) = re.subn(r"^(\s*" + key + r"\s*=).*$", r"\g<1>" + str(val), conf, flags = re.MULTILINE) if nsubs < 1: try: if conf[-1] != "\n": conf += "\n" except IndexError: pass conf += key + "=" + str(val) + "\n" return conf return conf_new def _del_key(self, conf, key): return re.sub(r"^\s*" + key + r"\s*=.*\n", "", conf, flags = re.MULTILINE) def _read_systemd_system_conf(self): systemd_system_conf = self._cmd.read_file(consts.SYSTEMD_SYSTEM_CONF_FILE, err_ret = None) if systemd_system_conf == None: log.error("error reading systemd configuration file") return None return systemd_system_conf def _write_systemd_system_conf(self, conf): tmpfile = consts.SYSTEMD_SYSTEM_CONF_FILE + consts.TMP_FILE_SUFFIX if not self._cmd.write_to_file(tmpfile, conf): log.error("error writing systemd configuration file") self._cmd.unlink(tmpfile, no_error = True) return False # Atomic replace, this doesn't work on Windows (AFAIK there is no way on Windows how to do this # atomically), but it's unlikely this code will run there if not self._cmd.rename(tmpfile, consts.SYSTEMD_SYSTEM_CONF_FILE): log.error("error replacing systemd configuration file '%s'" % consts.SYSTEMD_SYSTEM_CONF_FILE) self._cmd.unlink(tmpfile, no_error = True) return False return True def _get_storage_filename(self): return os.path.join(consts.PERSISTENT_STORAGE_DIR, self.name) def _remove_systemd_tuning(self): conf = self._read_systemd_system_conf() if (conf is not None): fname = self._get_storage_filename() cpu_affinity_saved = self._cmd.read_file(fname, err_ret = None, no_error = True) self._cmd.unlink(fname) if cpu_affinity_saved is None: conf = self._del_key(conf, consts.SYSTEMD_CPUAFFINITY_VAR) else: conf = self._add_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR, cpu_affinity_saved) self._write_systemd_system_conf(conf) def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback: log.info("removing '%s' systemd tuning previously added by Tuned" % consts.SYSTEMD_CPUAFFINITY_VAR) self._remove_systemd_tuning() log.console("you may need to manualy run 'dracut -f' to update the systemd configuration in initrd image") # convert cpulist from systemd syntax to Tuned syntax and unpack it def _cpulist_convert_unpack(self, cpulist): if cpulist is None: return "" return " ".join(str(v) for v in self._cmd.cpulist_unpack(re.sub(r"\s+", r",", re.sub(r",\s+", r",", cpulist)))) @command_custom("cpu_affinity", per_device = False) def _cmdline(self, enabling, value, verify, ignore_missing): conf_affinity = None conf_affinity_unpacked = None v = self._cmd.unescape(self._variables.expand(self._cmd.unquote(value))) v_unpacked = " ".join(str(v) for v in self._cmd.cpulist_unpack(v)) conf = self._read_systemd_system_conf() if conf is not None: conf_affinity = self._get_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR) conf_affinity_unpacked = self._cpulist_convert_unpack(conf_affinity) if verify: return 
self._verify_value("cpu_affinity", v_unpacked, conf_affinity_unpacked, ignore_missing) if enabling: fname = self._get_storage_filename() cpu_affinity_saved = self._cmd.read_file(fname, err_ret = None, no_error = True) if conf_affinity is not None and cpu_affinity_saved is None and v_unpacked != conf_affinity_unpacked: self._cmd.write_to_file(fname, conf_affinity, makedir = True) log.info("setting '%s' to '%s' in the '%s'" % (consts.SYSTEMD_CPUAFFINITY_VAR, v_unpacked, consts.SYSTEMD_SYSTEM_CONF_FILE)) self._write_systemd_system_conf(self._add_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR, v_unpacked)) tuned-2.10.0/tuned/plugins/plugin_usb.py000066400000000000000000000027621331721725100202430ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import glob log = tuned.logs.get() class USBPlugin(base.Plugin): """ Plugin for tuning various options of USB subsystem. """ def _init_devices(self): self._devices_supported = True self._free_devices = set() self._assigned_devices = set() for device in self._hardware_inventory.get_devices("usb").match_property("DEVTYPE", "usb_device"): self._free_devices.add(device.sys_name) self._cmd = commands() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("usb", x) for x in devices] @classmethod def _get_config_options(self): return { "autosuspend" : None, } def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass def _autosuspend_sysfile(self, device): return "/sys/bus/usb/devices/%s/power/autosuspend" % device @command_set("autosuspend", per_device=True) def _set_autosuspend(self, value, device, sim): enable = self._option_bool(value) if enable is None: return None val = "1" if enable else "0" if not sim: sys_file = self._autosuspend_sysfile(device) self._cmd.write_to_file(sys_file, val) return val @command_get("autosuspend") def _get_autosuspend(self, device, ignore_missing=False): sys_file = self._autosuspend_sysfile(device) return self._cmd.read_file(sys_file, no_error=ignore_missing).strip() tuned-2.10.0/tuned/plugins/plugin_video.py000066400000000000000000000054721331721725100205610ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import os import re log = tuned.logs.get() class VideoPlugin(base.Plugin): """ Plugin for tuning powersave options for some graphic cards. 
""" def _init_devices(self): self._devices_supported = True self._free_devices = set() self._assigned_devices = set() # FIXME: this is a blind shot, needs testing for device in self._hardware_inventory.get_devices("drm").match_sys_name("card*").match_property("DEVTYPE", "drm_minor"): self._free_devices.add(device.sys_name) self._cmd = commands() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("drm", x) for x in devices] @classmethod def _get_config_options(self): return { "radeon_powersave" : None, } def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True def _instance_cleanup(self, instance): pass def _radeon_powersave_files(self, device): return { "method" : "/sys/class/drm/%s/device/power_method" % device, "profile": "/sys/class/drm/%s/device/power_profile" % device, "dpm_state": "/sys/class/drm/%s/device/power_dpm_state" % device } @command_set("radeon_powersave", per_device=True) def _set_radeon_powersave(self, value, device, sim): sys_files = self._radeon_powersave_files(device) va = str(re.sub(r"(\s*:\s*)|(\s+)|(\s*;\s*)|(\s*,\s*)", " ", value)).split() if not os.path.exists(sys_files["method"]): if not sim: log.warn("radeon_powersave is not supported on '%s'" % device) return None for v in va: if v in ["default", "auto", "low", "mid", "high"]: if not sim: if (self._cmd.write_to_file(sys_files["method"], "profile") and self._cmd.write_to_file(sys_files["profile"], v)): return v elif v == "dynpm": if not sim: if (self._cmd.write_to_file(sys_files["method"], "dynpm")): return "dynpm" # new DPM profiles, recommended to use if supported elif v in ["dpm-battery", "dpm-balanced", "dpm-performance"]: if not sim: state = v[len("dpm-"):] if (self._cmd.write_to_file(sys_files["method"], "dpm") and self._cmd.write_to_file(sys_files["dpm_state"], state)): return v else: if not sim: log.warn("Invalid option for radeon_powersave.") return None return None @command_get("radeon_powersave") def _get_radeon_powersave(self, device, ignore_missing = False): sys_files = self._radeon_powersave_files(device) method = self._cmd.read_file(sys_files["method"], no_error=ignore_missing).strip() if method == "profile": return self._cmd.read_file(sys_files["profile"]).strip() elif method == "dynpm": return method elif method == "dpm": return "dpm-" + self._cmd.read_file(sys_files["dpm_state"]).strip() else: return None tuned-2.10.0/tuned/plugins/plugin_vm.py000066400000000000000000000040301331721725100200620ustar00rootroot00000000000000from . import base from .decorators import * import tuned.logs import os import struct import glob from tuned.utils.commands import commands log = tuned.logs.get() cmd = commands() class VMPlugin(base.Plugin): """ Plugin for tuning memory management. """ @classmethod def _get_config_options(self): return { "transparent_hugepages" : None, "transparent_hugepage" : None, } def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass @classmethod def _thp_file(self): path = "/sys/kernel/mm/transparent_hugepage/enabled" if not os.path.exists(path): path = "/sys/kernel/mm/redhat_transparent_hugepage/enabled" return path @command_set("transparent_hugepages") def _set_transparent_hugepages(self, value, sim): if value not in ["always", "never", "madvise"]: if not sim: log.warn("Incorrect 'transparent_hugepages' value '%s'." 
% str(value)) return None cmdline = cmd.read_file("/proc/cmdline", no_error = True) if cmdline.find("transparent_hugepage=") > 0: if not sim: log.info("transparent_hugepage is already set in kernel boot cmdline, ingoring value from profile") return None sys_file = self._thp_file() if os.path.exists(sys_file): if not sim: cmd.write_to_file(sys_file, value) return value else: if not sim: log.warn("Option 'transparent_hugepages' is not supported on current hardware.") return None # just an alias to transparent_hugepages @command_set("transparent_hugepage") def _set_transparent_hugepage(self, value, sim): self._set_transparent_hugepages(value, sim) @command_get("transparent_hugepages") def _get_transparent_hugepages(self): sys_file = self._thp_file() if os.path.exists(sys_file): return cmd.get_active_option(cmd.read_file(sys_file)) else: return None # just an alias to transparent_hugepages @command_get("transparent_hugepage") def _get_transparent_hugepage(self): return self._get_transparent_hugepages() tuned-2.10.0/tuned/plugins/repository.py000066400000000000000000000027721331721725100203140ustar00rootroot00000000000000from tuned.utils.plugin_loader import PluginLoader import tuned.plugins.base import tuned.logs log = tuned.logs.get() __all__ = ["Repository"] class Repository(PluginLoader): def __init__(self, monitor_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, plugin_instance_factory, global_cfg, variables): super(Repository, self).__init__() self._plugins = set() self._monitor_repository = monitor_repository self._storage_factory = storage_factory self._hardware_inventory = hardware_inventory self._device_matcher = device_matcher self._device_matcher_udev = device_matcher_udev self._plugin_instance_factory = plugin_instance_factory self._global_cfg = global_cfg self._variables = variables @property def plugins(self): return self._plugins def _set_loader_parameters(self): self._namespace = "tuned.plugins" self._prefix = "plugin_" self._interface = tuned.plugins.base.Plugin def create(self, plugin_name): log.debug("creating plugin %s" % plugin_name) plugin_cls = self.load_plugin(plugin_name) plugin_instance = plugin_cls(self._monitor_repository, self._storage_factory, self._hardware_inventory, self._device_matcher,\ self._device_matcher_udev, self._plugin_instance_factory, self._global_cfg, self._variables) self._plugins.add(plugin_instance) return plugin_instance def delete(self, plugin): assert isinstance(plugin, self._interface) log.debug("removing plugin %s" % plugin) plugin.cleanup() self._plugins.remove(plugin) tuned-2.10.0/tuned/profiles/000077500000000000000000000000001331721725100156555ustar00rootroot00000000000000tuned-2.10.0/tuned/profiles/__init__.py000066400000000000000000000004311331721725100177640ustar00rootroot00000000000000from tuned.profiles.locator import * from tuned.profiles.loader import * from tuned.profiles.profile import * from tuned.profiles.unit import * from tuned.profiles.exceptions import * from tuned.profiles.factory import * from tuned.profiles.merger import * from . 
import functions tuned-2.10.0/tuned/profiles/exceptions.py000066400000000000000000000001371331721725100204110ustar00rootroot00000000000000import tuned.exceptions class InvalidProfileException(tuned.exceptions.TunedException): pass tuned-2.10.0/tuned/profiles/factory.py000066400000000000000000000002151331721725100176740ustar00rootroot00000000000000import tuned.profiles.profile class Factory(object): def create(self, name, config): return tuned.profiles.profile.Profile(name, config) tuned-2.10.0/tuned/profiles/functions/000077500000000000000000000000001331721725100176655ustar00rootroot00000000000000tuned-2.10.0/tuned/profiles/functions/__init__.py000066400000000000000000000000431331721725100217730ustar00rootroot00000000000000from .repository import Repository tuned-2.10.0/tuned/profiles/functions/base.py000066400000000000000000000020201331721725100211430ustar00rootroot00000000000000import os import tuned.logs from tuned.utils.commands import commands log = tuned.logs.get() class Function(object): """ Built-in function """ def __init__(self, name, nargs_max, nargs_min = None): self._name = name self._nargs_max = nargs_max self._nargs_min = nargs_min self._cmd = commands() # checks arguments # nargs_max - maximal number of arguments, there mustn't be more arguments, # if nargs_max is 0, number of arguments is unlimited # nargs_min - minimal number of arguments, if not None there must # be the same number of arguments or more @classmethod def _check_args(cls, args, nargs_max, nargs_min = None): if args is None or nargs_max is None: return False la = len(args) return (nargs_max == 0 or nargs_max == la) and (nargs_min is None or nargs_min <= la) def execute(self, args): if self._check_args(args, self._nargs_max, self._nargs_min): return True else: log.error("invalid number of arguments for builtin function '%s'" % self._name) return False tuned-2.10.0/tuned/profiles/functions/function_assertion.py000066400000000000000000000013741331721725100241600ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands from tuned.profiles.exceptions import InvalidProfileException log = tuned.logs.get() class assertion(base.Function): """ Assertion: compares argument 2 with argument 3. If they don't match it logs text from argument 1 and throws InvalidProfileException. This exception will abort profile loading. """ def __init__(self): # 2 arguments super(assertion, self).__init__("assertion", 3) def execute(self, args): if not super(assertion, self).execute(args): return None if args[1] != args[2]: log.error("assertion '%s' failed: '%s' != '%s'" % (args[0], args[1], args[2])) raise InvalidProfileException("Assertion '%s' failed." % args[0]) return None tuned-2.10.0/tuned/profiles/functions/function_assertion_non_equal.py000066400000000000000000000014501331721725100262140ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands from tuned.profiles.exceptions import InvalidProfileException log = tuned.logs.get() class assertion_non_equal(base.Function): """ Assertion non equal: compares argument 2 with argument 3. If they match it logs text from argument 1 and throws InvalidProfileException. This exception will abort profile loading. 
""" def __init__(self): # 2 arguments super(assertion_non_equal, self).__init__("assertion_non_equal", 3) def execute(self, args): if not super(assertion_non_equal, self).execute(args): return None if args[1] == args[2]: log.error("assertion '%s' failed: '%s' == '%s'" % (args[0], args[1], args[2])) raise InvalidProfileException("Assertion '%s' failed." % args[0]) return None tuned-2.10.0/tuned/profiles/functions/function_cpulist2hex.py000066400000000000000000000007261331721725100244230ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist2hex(base.Function): """ Conversion function: converts CPU list to hexadecimal CPU mask """ def __init__(self): # arbitrary number of arguments super(cpulist2hex, self).__init__("cpulist2hex", 0) def execute(self, args): if not super(cpulist2hex, self).execute(args): return None return self._cmd.cpulist2hex(",,".join(args)) tuned-2.10.0/tuned/profiles/functions/function_cpulist2hex_invert.py000066400000000000000000000011601331721725100260030ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist2hex_invert(base.Function): """ Converts CPU list to hexadecimal CPU mask and inverts it """ def __init__(self): # arbitrary number of arguments super(cpulist2hex_invert, self).__init__("cpulist2hex_invert", 0) def execute(self, args): if not super(cpulist2hex_invert, self).execute(args): return None # current implementation inverts the CPU list and then converts it to hexmask return self._cmd.cpulist2hex(",".join(str(v) for v in self._cmd.cpulist_invert(",,".join(args)))) tuned-2.10.0/tuned/profiles/functions/function_cpulist_invert.py000066400000000000000000000012251331721725100252160ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_invert(base.Function): """ Inverts list of CPUs (makes its complement). For the complement it gets number of present CPUs from the /sys/devices/system/cpu/present, e.g. system with 4 CPUs (0-3), the inversion of list "0,2,3" will be "1" """ def __init__(self): # arbitrary number of arguments super(cpulist_invert, self).__init__("cpulist_invert", 0) def execute(self, args): if not super(cpulist_invert, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_invert(",,".join(args))) tuned-2.10.0/tuned/profiles/functions/function_cpulist_online.py000066400000000000000000000012331331721725100251720ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_online(base.Function): """ Checks whether CPUs from list are online, returns list containing only online CPUs """ def __init__(self): # arbitrary number of arguments super(cpulist_online, self).__init__("cpulist_online", 0) def execute(self, args): if not super(cpulist_online, self).execute(args): return None cpus = self._cmd.cpulist_unpack(",".join(args)) online = self._cmd.cpulist_unpack(self._cmd.read_file("/sys/devices/system/cpu/online")) return ",".join(str(v) for v in set(cpus).intersection(set(online))) tuned-2.10.0/tuned/profiles/functions/function_cpulist_pack.py000066400000000000000000000011751331721725100246310ustar00rootroot00000000000000import os import tuned.logs from . 
import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_pack(base.Function): """ Conversion function: packs CPU list in form 1,2,3,5 to 1-3,5. The cpulist_unpack is used as a preprocessor, so it always returns optimal results. For details about input syntax see cpulist_unpack. """ def __init__(self): # arbitrary number of arguments super(cpulist_pack, self).__init__("cpulist_pack", 0) def execute(self, args): if not super(cpulist_pack, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_pack(",,".join(args))) tuned-2.10.0/tuned/profiles/functions/function_cpulist_present.py000066400000000000000000000012631331721725100253710ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_present(base.Function): """ Checks whether CPUs from list are present, returns list containing only present CPUs """ def __init__(self): # arbitrary number of arguments super(cpulist_present, self).__init__("cpulist_present", 0) def execute(self, args): if not super(cpulist_present, self).execute(args): return None cpus = self._cmd.cpulist_unpack(",,".join(args)) present = self._cmd.cpulist_unpack(self._cmd.read_file("/sys/devices/system/cpu/present")) return ",".join(str(v) for v in sorted(list(set(cpus).intersection(set(present))))) tuned-2.10.0/tuned/profiles/functions/function_cpulist_unpack.py000066400000000000000000000007771331721725100252030ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_unpack(base.Function): """ Conversion function: unpacks CPU list in form 1-3,4 to 1,2,3,4 """ def __init__(self): # arbitrary number of arguments super(cpulist_unpack, self).__init__("cpulist_unpack", 0) def execute(self, args): if not super(cpulist_unpack, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_unpack(",,".join(args))) tuned-2.10.0/tuned/profiles/functions/function_exec.py000066400000000000000000000007471331721725100231000ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands class execute(base.Function): """ Executes process and substitutes its output. """ def __init__(self): # unlimited number of arguments, min 1 argument (the name of executable) super(execute, self).__init__("exec", 0, 1) def execute(self, args): if not super(execute, self).execute(args): return None (ret, out) = self._cmd.execute(args) if ret == 0: return out return None tuned-2.10.0/tuned/profiles/functions/function_hex2cpulist.py000066400000000000000000000007271331721725100244240ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class hex2cpulist(base.Function): """ Conversion function: converts hexadecimal CPU mask to CPU list """ def __init__(self): # one argument super(hex2cpulist, self).__init__("hex2cpulist", 1) def execute(self, args): if not super(hex2cpulist, self).execute(args): return None return ",".join(str(v) for v in self._cmd.hex2cpulist(args[0])) tuned-2.10.0/tuned/profiles/functions/function_kb2s.py000066400000000000000000000006241331721725100230070ustar00rootroot00000000000000import os import tuned.logs from . 
import base from tuned.utils.commands import commands class kb2s(base.Function): """ Conversion function: kbytes to sectors """ def __init__(self): # one argument super(kb2s, self).__init__("kb2s", 1) def execute(self, args): if not super(kb2s, self).execute(args): return None try: return str(int(args[0]) * 2) except ValueError: return None tuned-2.10.0/tuned/profiles/functions/function_s2kb.py000066400000000000000000000006241331721725100230070ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands class s2kb(base.Function): """ Conversion function: sectors to kbytes """ def __init__(self): # one argument super(s2kb, self).__init__("s2kb", 1) def execute(self, args): if not super(s2kb, self).execute(args): return None try: return str(int(args[0]) / 2) except ValueError: return None tuned-2.10.0/tuned/profiles/functions/function_strip.py000066400000000000000000000006261331721725100233110ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands class strip(base.Function): """ Makes string from all arguments and strip it """ def __init__(self): # unlimited number of arguments, min 1 argument super(strip, self).__init__("strip", 0, 1) def execute(self, args): if not super(strip, self).execute(args): return None return "".join(args).strip() tuned-2.10.0/tuned/profiles/functions/function_virt_check.py000066400000000000000000000011201331721725100242570ustar00rootroot00000000000000import os import tuned.logs from . import base from tuned.utils.commands import commands class virt_check(base.Function): """ Checks whether running inside virtual machine (VM) or on bare metal. If running inside VM expands to argument 1, otherwise expands to argument 2 (even on error). """ def __init__(self): # 2 arguments super(virt_check, self).__init__("virt_check", 2) def execute(self, args): if not super(virt_check, self).execute(args): return None (ret, out) = self._cmd.execute(["virt-what"]) if ret == 0 and len(out) > 0: return args[0] return args[1] tuned-2.10.0/tuned/profiles/functions/functions.py000066400000000000000000000041411331721725100222470ustar00rootroot00000000000000import os import re import glob from . import repository import tuned.logs import tuned.consts as consts from tuned.utils.commands import commands log = tuned.logs.get() cmd = commands() class Functions(): """ Built-in functions """ def __init__(self): self._repository = repository.Repository() self._parse_init() def _parse_init(self, s = ""): self._cnt = 0 self._str = s self._len = len(s) self._stack = [] self._esc = False def _curr_char(self): return self._str[self._cnt] if self._cnt < self._len else "" def _curr_substr(self, _len): return self._str[self._cnt:self._cnt + _len] def _push_pos(self, esc): self._stack.append((esc, self._cnt)) def _sub(self, a, b, s): self._str = self._str[:a] + s + self._str[b + 1:] self._len = len(self._str) self._cnt += len(s) - (b - a + 1) if self._cnt < 0: self._cnt = 0 def _process_func(self, _from): sl = re.split(r'(? 
1: log.info("loading profiles: %s" % ", ".join(profile_names)) else: log.info("loading profile: %s" % profile_names[0]) profiles = [] processed_files = [] self._load_profile(profile_names, profiles, processed_files) if len(profiles) > 1: final_profile = self._profile_merger.merge(profiles) else: final_profile = profiles[0] final_profile.name = " ".join(profile_names) if "variables" in final_profile.units: self._variables.add_from_cfg(final_profile.units["variables"].options) del(final_profile.units["variables"]) # FIXME hack, do all variable expansions in one place self._expand_vars_in_devices(final_profile) return final_profile def _expand_vars_in_devices(self, profile): for unit in profile.units: profile.units[unit].devices = self._variables.expand(profile.units[unit].devices) def _load_profile(self, profile_names, profiles, processed_files): for name in profile_names: filename = self._profile_locator.get_config(name, processed_files) if filename is None: raise InvalidProfileException("Cannot find profile '%s' in '%s'." % (name, list(reversed(self._profile_locator._load_directories)))) processed_files.append(filename) config = self._load_config_data(filename) profile = self._profile_factory.create(name, config) if "include" in profile.options: include_name = self._variables.expand(profile.options.pop("include")) self._load_profile([include_name], profiles, processed_files) profiles.append(profile) def _expand_profile_dir(self, profile_dir, string): return re.sub(r'(? '%s'" % (f, old_value)) return old_value def rmtree(self, f, no_error = False): self._debug("Removing tree: '%s'" % f) if os.path.exists(f): try: shutil.rmtree(f, no_error) except OSError as error: if not no_error: log.error("cannot remove tree '%s': '%s'" % (f, str(error))) return False return True def unlink(self, f, no_error = False): self._debug("Removing file: '%s'" % f) if os.path.exists(f): try: os.unlink(f) except OSError as error: if not no_error: log.error("cannot remove file '%s': '%s'" % (f, str(error))) return False return True def rename(self, src, dst, no_error = False): self._debug("Renaming file '%s' to '%s'" % (src, dst)) try: os.rename(src, dst) except OSError as error: if not no_error: log.error("cannot rename file '%s' to '%s': '%s'" % (src, dst, str(error))) return False return True def copy(self, src, dst, no_error = False): try: log.debug("copying file '%s' to '%s'" % (src, dst)) shutil.copy(src, dst) return True except IOError as e: if not no_error: log.error("cannot copy file '%s' to '%s': %s" % (src, dst, e)) return False def replace_in_file(self, f, pattern, repl): data = self.read_file(f) if len(data) <= 0: return False; return self.write_to_file(f, re.sub(pattern, repl, data, flags = re.MULTILINE)) # do multiple replaces in file 'f' by using dictionary 'd', # e.g.: d = {"re1": val1, "re2": val2, ...} def multiple_replace_in_file(self, f, d): data = self.read_file(f) if len(data) <= 0: return False; return self.write_to_file(f, self.multiple_re_replace(d, data, flags = re.MULTILINE)) # makes sure that options from 'd' are set to values from 'd' in file 'f', # when needed it edits options or add new options if they don't # exist and 'add' is set to True, 'd' has the following form: # d = {"option_1": value_1, "option_2": value_2, ...} def add_modify_option_in_file(self, f, d, add = True): data = self.read_file(f) for opt in d: o = str(opt) v = str(d[opt]) if re.search(r"\b" + o + r"\s*=.*$", data, flags = re.MULTILINE) is None: if add: if len(data) > 0 and data[-1] != "\n": data += "\n" data += 
"%s=\"%s\"\n" % (o, v) else: data = re.sub(r"\b(" + o + r"\s*=).*$", r"\1" + "\"" + v + "\"", data, flags = re.MULTILINE) return self.write_to_file(f, data) # "no_errors" can be list of return codes not treated as errors, if 0 is in no_errors, it means any error # returns (retcode, out), where retcode is exit code of the executed process or -errno if # OSError or IOError exception happened def execute(self, args, shell = False, cwd = None, no_errors = [], return_err = False): retcode = 0 if self._environment is None: self._environment = os.environ.copy() self._environment["LC_ALL"] = "C" self._debug("Executing %s." % str(args)) out = "" err_msg = None try: proc = Popen(args, stdout = PIPE, stderr = PIPE, \ env = self._environment, \ shell = shell, cwd = cwd, \ close_fds = True, \ universal_newlines = True) out, err = proc.communicate() retcode = proc.returncode if retcode and not retcode in no_errors and not 0 in no_errors: err_out = err[:-1] if len(err_out) == 0: err_out = out[:-1] err_msg = "Executing %s error: %s" % (args[0], err_out) if not return_err: self._error(err_msg) except (OSError, IOError) as e: retcode = -e.errno if e.errno is not None else -1 if not abs(retcode) in no_errors and not 0 in no_errors: err_msg = "Executing %s error: %s" % (args[0], e) if not return_err: self._error(err_msg) if return_err: return retcode, out, err_msg else: return retcode, out # Helper for parsing kernel options like: # [always] never # It will return 'always' def get_active_option(self, options, dosplit = True): m = re.match(r'.*\[([^\]]+)\].*', options) if m: return m.group(1) if dosplit: return options.split()[0] return options # Checks whether CPU is online def is_cpu_online(self, cpu): scpu = str(cpu) # CPU0 is always online return cpu == "0" or self.read_file("/sys/devices/system/cpu/cpu%s/online" % scpu, no_error = True).strip() == "1" # Converts hexadecimal CPU mask to CPU list def hex2cpulist(self, mask): if mask is None: return None mask = str(mask).replace(",", "") try: m = int(mask, 16) except ValueError: log.error("invalid hexadecimal mask '%s'" % str(mask)) return [] return self.bitmask2cpulist(m) # Converts an integer bitmask to a list of cpus (e.g. [0,3,4]) def bitmask2cpulist(self, mask): cpu = 0 cpus = [] while mask > 0: if mask & 1: cpus.append(cpu) mask >>= 1 cpu += 1 return cpus # Unpacks CPU list, i.e. 1-3 will be converted to 1, 2, 3, supports # hexmasks that needs to be prefixed by "0x". Hexmasks can have commas, # which will be removed. If combining hexmasks with CPU list they need # to be separated by ",,", e.g.: 0-3, 0xf,, 6. It also supports negation # cpus by specifying "^" or "!", e.g.: 0-5, ^3, will output the list as # "0,1,2,4,5" (excluding 3). Note: negation supports only cpu numbers. 
def cpulist_unpack(self, l): rl = [] if l is None: return l if type(l) is list: ll = l else: ll = str(l).split(",") ll2 = [] negation_list = [] hexmask = False hv = "" # Remove commas from hexmasks for v in ll: sv = str(v) if hexmask: if len(sv) == 0: hexmask = False ll2.append(hv) hv = "" else: hv += sv else: if sv[0:2].lower() == "0x": hexmask = True hv = sv elif sv and (sv[0] == "^" or sv[0] == "!"): negation_list.append(int(sv[1:])) else: if len(sv) > 0: ll2.append(sv) if len(hv) > 0: ll2.append(hv) for v in ll2: vl = v.split("-") if v[0:2].lower() == "0x": rl += self.hex2cpulist(v) else: try: if len(vl) > 1: rl += list(range(int(vl[0]), int(vl[1]) + 1)) else: rl.append(int(vl[0])) except ValueError: return [] cpu_list = sorted(list(set(rl))) # Remove negated cpus after expanding for cpu in negation_list: if cpu in cpu_list: cpu_list.remove(cpu) return cpu_list # Packs CPU list, i.e. 1, 2, 3 will be converted to 1-3. It unpacks the # CPU list through cpulist_unpack first, so see its description about the # details of the input syntax def cpulist_pack(self, l): l = self.cpulist_unpack(l) if l is None or len(l) == 0: return l i = 0 j = i rl = [] while i + 1 < len(l): if l[i + 1] - l[i] != 1: if j != i: rl.append(str(l[j]) + "-" + str(l[i])) else: rl.append(str(l[i])) j = i + 1 i += 1 if j + 1 < len(l): rl.append(str(l[j]) + "-" + str(l[-1])) else: rl.append(str(l[-1])) return rl # Inverts CPU list (i.e. makes its complement) def cpulist_invert(self, l): cpus = self.cpulist_unpack(l) present = self.cpulist_unpack(self.read_file("/sys/devices/system/cpu/present")) return list(set(present) - set(cpus)) # Converts CPU list to hexadecimal CPU mask def cpulist2hex(self, l): if l is None: return None ul = self.cpulist_unpack(l) if ul is None: return None m = self.cpulist2bitmask(ul) s = "%x" % m ls = len(s) if ls % 8 != 0: ls += 8 - ls % 8 s = s.zfill(ls) return ",".join(s[i:i + 8] for i in range(0, len(s), 8)) def cpulist2bitmask(self, l): m = 0 for v in l: m |= pow(2, v) return m def process_recommend_file(self, fname): matching_profile = None try: if not os.path.isfile(fname): return None config = ConfigObj(fname, list_values = False, interpolation = False) for section in list(config.keys()): match = True for option in list(config[section].keys()): value = config[section][option] if value == "": value = r"^$" if option == "virt": if not re.match(value, self.execute("virt-what")[1], re.S): match = False elif option == "system": if not re.match(value, self.read_file(consts.SYSTEM_RELEASE_FILE), re.S): match = False elif option[0] == "/": if not os.path.exists(option) or not re.match(value, self.read_file(option), re.S): match = False elif option[0:7] == "process": ps = procfs.pidstats() ps.reload_threads() if len(ps.find_by_regex(re.compile(value))) == 0: match = False if match: # remove the ",.*" suffix r = re.compile(r",[^,]*$") matching_profile = r.sub("", section) break except (IOError, OSError, ConfigObjError) as e: log.error("error processing '%s', %s" % (fname, e)) return matching_profile def recommend_profile(self, hardcoded = False): profile = consts.DEFAULT_PROFILE if hardcoded: return profile matching = self.process_recommend_file(consts.RECOMMEND_CONF_FILE) if matching is not None: return matching files = {} for directory in consts.RECOMMEND_DIRECTORIES: contents = [] try: contents = os.listdir(directory) except OSError as e: if e.errno != errno.ENOENT: log.error("error accessing %s: %s" % (directory, e)) for name in contents: path = os.path.join(directory, name) files[name] = path 
for name in sorted(files.keys()): path = files[name] matching = self.process_recommend_file(path) if matching is not None: return matching return profile # Do not make balancing on patched Python 2 interpreter (rhbz#1028122). # It means less CPU usage on patchet interpreter. On non-patched interpreter # it is not allowed to sleep longer than 50 ms. def wait(self, terminate, time): try: return terminate.wait(time, False) except: return terminate.wait(time) def get_size(self, s): s = str(s).strip().upper() for unit in ["KB", "MB", "GB", ""]: unit_ix = s.rfind(unit) if unit_ix == -1: continue try: val = int(s[:unit_ix]) u = s[unit_ix:] if u == "KB": val *= 1024 elif u == "MB": val *= 1024 * 1024 elif u == "GB": val *= 1024 * 1024 * 1024 elif u != "": val = None return val except ValueError: return None def get_active_profile(self): profile_name = "" mode = "" try: with open(consts.ACTIVE_PROFILE_FILE, "r") as f: profile_name = f.read().strip() except IOError as e: if e.errno != errno.ENOENT: raise TunedException("Failed to read active profile: %s" % e) except (OSError, EOFError) as e: raise TunedException("Failed to read active profile: %s" % e) try: with open(consts.PROFILE_MODE_FILE, "r") as f: mode = f.read().strip() if mode not in ["", consts.ACTIVE_PROFILE_AUTO, consts.ACTIVE_PROFILE_MANUAL]: raise TunedException("Invalid value in file %s." % consts.PROFILE_MODE_FILE) except IOError as e: if e.errno != errno.ENOENT: raise TunedException("Failed to read profile mode: %s" % e) except (OSError, EOFError) as e: raise TunedException("Failed to read profile mode: %s" % e) if mode == "": manual = None else: manual = mode == consts.ACTIVE_PROFILE_MANUAL if profile_name == "": profile_name = None return (profile_name, manual) def save_active_profile(self, profile_name, manual): try: with open(consts.ACTIVE_PROFILE_FILE, "w") as f: if profile_name is not None: f.write(profile_name + "\n") except (OSError,IOError) as e: raise TunedException("Failed to save active profile: %s" % e.strerror) try: with open(consts.PROFILE_MODE_FILE, "w") as f: mode = consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO f.write(mode + "\n") except (OSError,IOError) as e: raise TunedException("Failed to save profile mode: %s" % e.strerror) tuned-2.10.0/tuned/utils/global_config.py000066400000000000000000000036561331721725100203430ustar00rootroot00000000000000import tuned.logs from configobj import ConfigObj, ConfigObjError from validate import Validator from tuned.exceptions import TunedException import tuned.consts as consts from tuned.utils.commands import commands __all__ = ["GlobalConfig"] log = tuned.logs.get() class GlobalConfig(): global_config_spec = ["dynamic_tuning = boolean(default=%s)" % consts.CFG_DEF_DYNAMIC_TUNING, "sleep_interval = integer(default=%s)" % consts.CFG_DEF_SLEEP_INTERVAL, "update_interval = integer(default=%s)" % consts.CFG_DEF_UPDATE_INTERVAL, "recommend_command = boolean(default=%s)" % consts.CFG_DEF_RECOMMEND_COMMAND] def __init__(self): self._cfg = {} self.load_config() self._cmd = commands() def load_config(self, file_name = consts.GLOBAL_CONFIG_FILE): """ Loads global configuration file. """ log.debug("reading and parsing global configuration file '%s'" % file_name) try: self._cfg = ConfigObj(file_name, configspec = self.global_config_spec, raise_errors = True, \ file_error = True, list_values = False, interpolation = False) except IOError as e: raise TunedException("Global tuned configuration file '%s' not found." 
% file_name) except ConfigObjError as e: raise TunedException("Error parsing global tuned configuration file '%s'." % file_name) vdt = Validator() if (not self._cfg.validate(vdt, copy=True)): raise TunedException("Global tuned configuration file '%s' is not valid." % file_name) def get(self, key, default = None): return self._cfg.get(key, default) def get_bool(self, key, default = None): if self._cmd.get_bool(self.get(key, default)) == "1": return True return False def set(self, key, value): self._cfg[key] = value def get_size(self, key, default = None): val = self.get(key) if val is None: return default ret = self._cmd.get_size(val) if ret is None: log.error("Error parsing value '%s', using '%s'." %(val, default)) return default else: return ret tuned-2.10.0/tuned/utils/nettool.py000066400000000000000000000131061331721725100172310ustar00rootroot00000000000000__all__ = ["ethcard"] import tuned.logs from subprocess import * import re log = tuned.logs.get() class Nettool: _advertise_values = { # [ half, full ] 10 : [ 0x001, 0x002 ], 100 : [ 0x004, 0x008 ], 1000 : [ 0x010, 0x020 ], 2500 : [ 0, 0x8000 ], 10000 : [ 0, 0x1000 ], "auto" : 0x03F } _disabled = False def __init__(self, interface): self._interface = interface; self.update() log.debug("%s: speed %s, full duplex %s, autoneg %s, link %s" % (interface, self.speed, self.full_duplex, self.autoneg, self.link)) log.debug("%s: supports: autoneg %s, modes %s" % (interface, self.supported_autoneg, self.supported_modes)) log.debug("%s: advertises: autoneg %s, modes %s" % (interface, self.advertised_autoneg, self.advertised_modes)) # def __del__(self): # if self.supported_autoneg: # self._set_advertise(self._advertise_values["auto"]) def _clean_status(self): self.speed = 0 self.full_duplex = False self.autoneg = False self.link = False self.supported_modes = [] self.supported_autoneg = False self.advertised_modes = [] self.advertised_autoneg = False def _calculate_mode(self, modes): mode = 0; for m in modes: mode += self._advertise_values[m[0]][ 1 if m[1] else 0 ] return mode def _set_autonegotiation(self, enable): if self.autoneg == enable: return True if not self.supported_autoneg: return False return 0 == call(["ethtool", "-s", self._interface, "autoneg", "on" if enable else "off"], close_fds=True) def _set_advertise(self, value): if not self._set_autonegotiation(True): return False return 0 == call(["ethtool", "-s", self._interface, "advertise", "0x%03x" % value], close_fds=True) def get_max_speed(self): max = 0 for mode in self.supported_modes: if mode[0] > max: max = mode[0] if max > 0: return max else: return 1000 def set_max_speed(self): if self._disabled or not self.supported_autoneg: return False #if self._set_advertise(self._calculateMode(self.supported_modes)): if self._set_advertise(self._advertise_values["auto"]): self.update() return True else: return False def set_speed(self, speed): if self._disabled or not self.supported_autoneg: return False mode = 0 for am in self._advertise_values: if am == "auto": continue if am <= speed: mode += self._advertise_values[am][0]; mode += self._advertise_values[am][1]; effective_mode = mode & self._calculate_mode(self.supported_modes) log.debug("%s: set_speed(%d) - effective_mode 0x%03x" % (self._interface, speed, effective_mode)) if self._set_advertise(effective_mode): self.update() return True else: return False def update(self): if self._disabled: return # run ethtool and preprocess output p_ethtool = Popen(["ethtool", self._interface], \ stdout=PIPE, stderr=PIPE, close_fds=True, \ 
				universal_newlines = True)
		p_filter = Popen(["sed", "s/^\s*//;s/:\s*/:\\n/g"], \
				stdin=p_ethtool.stdout, stdout=PIPE, \
				universal_newlines = True, \
				close_fds=True)
		output = p_filter.communicate()[0]
		errors = p_ethtool.communicate()[1]
		if errors != "":
			log.warning("%s: some errors were reported by 'ethtool'" % self._interface)
			log.debug("%s: %s" % (self._interface, errors.replace("\n", r"\n")))
			self._clean_status()
			self._disabled = True
			return

		# parse the output - kind of FSM
		self._clean_status()
		re_speed = re.compile(r"(\d+)")
		re_mode = re.compile(r"(\d+)baseT/(Half|Full)")

		state = "wait"
		for line in output.split("\n"):
			if line.endswith(":"):
				section = line[:-1]
				if section == "Speed":
					state = "speed"
				elif section == "Duplex":
					state = "duplex"
				elif section == "Auto-negotiation":
					state = "autoneg"
				elif section == "Link detected":
					state = "link"
				elif section == "Supported link modes":
					state = "supported_modes"
				elif section == "Supports auto-negotiation":
					state = "supported_autoneg"
				elif section == "Advertised link modes":
					state = "advertised_modes"
				elif section == "Advertised auto-negotiation":
					state = "advertised_autoneg"
				else:
					state = "wait"
				del section
			elif state == "speed":
				# Try to determine the speed. If it fails, assume 1 Gbit Ethernet.
				try:
					self.speed = re_speed.match(line).group(1)
				except:
					self.speed = 1000
				state = "wait"
			elif state == "duplex":
				self.full_duplex = line == "Full"
				state = "wait"
			elif state == "autoneg":
				self.autoneg = (line == "yes" or line == "on")
				state = "wait"
			elif state == "link":
				self.link = line == "yes"
				state = "wait"
			elif state == "supported_modes":
				# Try to determine the supported modes. If it fails, assume that
				# 1 Gbit full-duplex Ethernet works.
				try:
					for m in line.split():
						(s, d) = re_mode.match(m).group(1, 2)
						self.supported_modes.append((int(s), d == "Full"))
					del m, s, d
				except:
					self.supported_modes.append((1000, True))
			elif state == "supported_autoneg":
				self.supported_autoneg = line == "Yes"
				state = "wait"
			elif state == "advertised_modes":
				# Try to determine the advertised modes. If it fails, assume that
				# 1 Gbit full-duplex Ethernet works.
				try:
					if line != "Not reported":
						for m in line.split():
							(s, d) = re_mode.match(m).group(1, 2)
							self.advertised_modes.append((int(s), d == "Full"))
						del m, s, d
				except:
					self.advertised_modes.append((1000, True))
			elif state == "advertised_autoneg":
				self.advertised_autoneg = line == "Yes"
				state = "wait"


def ethcard(interface):
	if not interface in ethcard.list:
		ethcard.list[interface] = Nettool(interface)
	return ethcard.list[interface]

ethcard.list = {}

tuned-2.10.0/tuned/utils/plugin_loader.py

import tuned.logs

__all__ = ["PluginLoader"]

log = tuned.logs.get()

class PluginLoader(object):
	__slots__ = ["_namespace", "_prefix", "_interface"]

	def _set_loader_parameters(self):
		"""
		This method has to be implemented in a child class and should set
		the _namespace, _prefix, and _interface member attributes.
""" raise NotImplementedError() def __init__(self): super(PluginLoader, self).__init__() self._namespace = None self._prefix = None self._interface = None self._set_loader_parameters() assert type(self._namespace) is str assert type(self._prefix) is str assert type(self._interface) is type and issubclass(self._interface, object) def load_plugin(self, plugin_name): assert type(plugin_name) is str module_name = "%s.%s%s" % (self._namespace, self._prefix, plugin_name) return self._get_class(module_name) def _get_class(self, module_name): log.debug("loading module %s" % module_name) module = __import__(module_name) path = module_name.split(".") path.pop(0) while len(path) > 0: module = getattr(module, path.pop(0)) for name in module.__dict__: cls = getattr(module, name) if type(cls) is type and issubclass(cls, self._interface): return cls raise ImportError("Cannot find the plugin class.") tuned-2.10.0/tuned/utils/polkit.py000066400000000000000000000026271331721725100170550ustar00rootroot00000000000000import dbus import tuned.logs log = tuned.logs.get() class polkit(): def __init__(self): self._bus = dbus.SystemBus() self._proxy = self._bus.get_object('org.freedesktop.PolicyKit1', '/org/freedesktop/PolicyKit1/Authority', follow_name_owner_changes = True) self._authority = dbus.Interface(self._proxy, dbus_interface='org.freedesktop.PolicyKit1.Authority') def check_authorization(self, sender, action_id): """Check authorization, return codes: 1 - authorized 2 - polkit error, but authorized with fallback method 0 - unauthorized -1 - polkit error and unauthorized by the fallback method -2 - polkit error and unable to use the fallback method """ if sender is None or action_id is None: return False details = {} flags = 1 # AllowUserInteraction flag cancellation_id = "" # No cancellation id subject = ("system-bus-name", {"name" : sender}) try: ret = self._authority.CheckAuthorization(subject, action_id, details, flags, cancellation_id)[0] except (dbus.exceptions.DBusException, ValueError) as e: log.error("error querying polkit: %s" % e) # No polkit or polkit error, fallback to always allow root try: uid = self._bus.get_unix_user(sender) except dbus.exceptions.DBusException as e: log.error("error using falback authorization method: %s" % e) return -2 if uid == 0: return 2 else: return -1 return 1 if ret else 0 tuned-2.10.0/tuned/version.py000066400000000000000000000001111331721725100160620ustar00rootroot00000000000000TUNED_VERSION_MAJOR = 2 TUNED_VERSION_MINOR = 10 TUNED_VERSION_PATCH = 0