Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 3 additions and 3999 deletions
  all:
    hosts:
-     oas-dev:
+     stackspin-dev:
        # Node to deploy to; can be a name or address.
        ansible_host: "203.0.113.6"
        # Ssh user to log in as.
@@ -8,7 +8,7 @@ all:
    children:
      master:
        hosts:
-         oas-dev
+         stackspin-dev
      worker:
        hosts:
-         oas-dev
+         stackspin-dev
path_classifiers:
library:
- "mitogen/compat"
- "ansible_mitogen/compat"
queries:
# Python 2.4 compatibility trips this query everywhere, so just disable it
- exclude: py/unreachable-statement
- exclude: py/should-use-with
# mitogen.core.b() trips this query everywhere, so just disable it
- exclude: py/import-and-import-from
Copyright 2019, David Wilson
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include LICENSE
# Mitogen
<a href="https://mitogen.networkgenomics.com/">Please see the documentation</a>.
![](https://i.imgur.com/eBM6LhJ.gif)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/dw/mitogen.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/dw/mitogen/alerts/)
[![Build Status](https://travis-ci.org/dw/mitogen.svg?branch=master)](https://travis-ci.org/dw/mitogen)
[![Pipelines Status](https://dev.azure.com/dw-mitogen/Mitogen/_apis/build/status/dw.mitogen?branchName=master)](https://dev.azure.com/dw-mitogen/Mitogen/_build/latest?definitionId=1?branchName=master)
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""
As Mitogen separates asynchronous IO out to a broker thread, communication
necessarily involves context switching and waking that thread. When application
threads and the broker share a CPU, this can be almost invisibly fast - around
25 microseconds for a full A->B->A round-trip.
However when threads are scheduled on different CPUs, round-trip delays
regularly vary wildly, and easily into milliseconds. Many contributing factors
exist, not least scenarios like:
1. A is preempted immediately after waking B, but before releasing the GIL.
2. B wakes from IO wait only to immediately enter futex wait.
3. A may wait 10ms or more for another timeslice, as the scheduler on its CPU
runs threads unrelated to its transaction (i.e. not B), wake only to release
its GIL, before entering IO sleep waiting for a reply from B, which cannot
exist yet.
4. B wakes, acquires GIL, performs work, and sends reply to A, causing it to
wake. B is preempted before releasing GIL.
5. A wakes from IO wait only to immediately enter futex wait.
6. B may wait 10ms or more for another timeslice, wake only to release its GIL,
before sleeping again.
7. A wakes, acquires GIL, finally receives reply.
Per the above, if we are unlucky, on an even moderately busy machine it is possible
to lose milliseconds just in scheduling delay, and the effect is compounded
when pairs of threads in process A are communicating with pairs of threads in
process B using the same scheme, such as when Ansible WorkerProcess is
communicating with ContextService in the connection multiplexer. In the worst
case it could involve 4 threads working in lockstep spread across 4 busy CPUs.
Since multithreading in Python is essentially useless except for waiting on IO
due to the presence of the GIL, at least in Ansible there is no good reason for
threads in the same process to run on distinct CPUs - they always operate in
lockstep due to the GIL, and are thus vulnerable to issues like above.
Linux lacks any natural API to describe what we want; it only permits
individual threads to be constrained to run on specific CPUs, and for that
constraint to be inherited by new threads and forks of the constrained thread.
This module therefore implements a CPU pinning policy for Ansible processes,
providing methods that should be called early in any new process, either to
rebalance which CPU it is pinned to, or in the case of subprocesses, to remove
the pinning entirely. It is likely to require ongoing tweaking, since pinning
necessarily involves preventing the scheduler from making load balancing
decisions.
"""
from __future__ import absolute_import
import ctypes
import logging
import mmap
import multiprocessing
import os
import struct
import mitogen.core
import mitogen.parent
LOG = logging.getLogger(__name__)
try:
_libc = ctypes.CDLL(None, use_errno=True)
_strerror = _libc.strerror
_strerror.restype = ctypes.c_char_p
_sem_init = _libc.sem_init
_sem_wait = _libc.sem_wait
_sem_post = _libc.sem_post
_sched_setaffinity = _libc.sched_setaffinity
except (OSError, AttributeError):
_libc = None
_strerror = None
_sem_init = None
_sem_wait = None
_sem_post = None
_sched_setaffinity = None
class sem_t(ctypes.Structure):
"""
Wrap sem_t to allow storing a lock in shared memory.
"""
_fields_ = [
('data', ctypes.c_uint8 * 128),
]
def init(self):
if _sem_init(self.data, 1, 1):
raise Exception(_strerror(ctypes.get_errno()))
def acquire(self):
if _sem_wait(self.data):
raise Exception(_strerror(ctypes.get_errno()))
def release(self):
if _sem_post(self.data):
raise Exception(_strerror(ctypes.get_errno()))
class State(ctypes.Structure):
"""
Contents of shared memory segment. This allows :meth:`Manager.assign` to be
called from any child, since affinity assignment must happen from within
the context of the new child process.
"""
_fields_ = [
('lock', sem_t),
('counter', ctypes.c_uint8),
]
class Policy(object):
"""
Process affinity policy.
"""
def assign_controller(self):
"""
Assign the Ansible top-level policy to this process.
"""
def assign_muxprocess(self, index):
"""
Assign the MuxProcess policy to this process.
"""
def assign_worker(self):
"""
Assign the WorkerProcess policy to this process.
"""
def assign_subprocess(self):
"""
Assign the helper subprocess policy to this process.
"""
class FixedPolicy(Policy):
"""
:class:`Policy` for machines where the only control method available is
fixed CPU placement. The scheme here was tested on an otherwise idle 16
thread machine.
- The connection multiplexer is pinned to CPU 0.
- The Ansible top-level (strategy) is pinned to CPU 1.
- WorkerProcesses are pinned sequentially to 2..N, wrapping around when no
more CPUs exist.
- Children such as SSH may be scheduled on any CPU except 0/1.
If the machine has fewer than 4 cores available, the top-level and workers
are pinned between CPU 1..N, i.e. no CPU is reserved for the top-level
process.
This could at least be improved by having workers pinned to independent
cores, before reusing the second hyperthread of an existing core.
A hook is installed that causes :meth:`reset` to run in the child of any
process created with :func:`mitogen.parent.popen`, ensuring CPU-intensive
children like SSH are not forced to share the same core as the (otherwise
potentially very busy) parent.
"""
def __init__(self, cpu_count=None):
#: For tests.
self.cpu_count = cpu_count or multiprocessing.cpu_count()
self.mem = mmap.mmap(-1, 4096)
self.state = State.from_buffer(self.mem)
self.state.lock.init()
if self.cpu_count < 2:
# uniprocessor
self._reserve_mux = False
self._reserve_controller = False
self._reserve_mask = 0
self._reserve_shift = 0
elif self.cpu_count < 4:
# small SMP
self._reserve_mux = True
self._reserve_controller = False
self._reserve_mask = 1
self._reserve_shift = 1
else:
# big SMP
self._reserve_mux = True
self._reserve_controller = True
self._reserve_mask = 3
self._reserve_shift = 2
def _set_affinity(self, descr, mask):
if descr:
LOG.debug('CPU mask for %s: %#08x', descr, mask)
mitogen.parent._preexec_hook = self._clear
self._set_cpu_mask(mask)
def _balance(self, descr):
self.state.lock.acquire()
try:
n = self.state.counter
self.state.counter += 1
finally:
self.state.lock.release()
self._set_cpu(descr, self._reserve_shift + (
(n % (self.cpu_count - self._reserve_shift))
))
def _set_cpu(self, descr, cpu):
self._set_affinity(descr, 1 << (cpu % self.cpu_count))
def _clear(self):
all_cpus = (1 << self.cpu_count) - 1
self._set_affinity(None, all_cpus & ~self._reserve_mask)
def assign_controller(self):
if self._reserve_controller:
self._set_cpu('Ansible top-level process', 1)
else:
self._balance('Ansible top-level process')
def assign_muxprocess(self, index):
self._set_cpu('MuxProcess %d' % (index,), index)
def assign_worker(self):
self._balance('WorkerProcess')
def assign_subprocess(self):
self._clear()
class LinuxPolicy(FixedPolicy):
def _mask_to_bytes(self, mask):
"""
Convert the (type long) mask to a cpu_set_t.
"""
chunks = []
shiftmask = (2 ** 64) - 1
for x in range(16):
chunks.append(struct.pack('<Q', mask & shiftmask))
mask >>= 64
return mitogen.core.b('').join(chunks)
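# Illustrative example (not in the original source): _mask_to_bytes(0b101)
# returns 128 bytes beginning b'\x05\x00\x00...', i.e. a little-endian
# cpu_set_t selecting CPUs 0 and 2.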
def _get_thread_ids(self):
try:
ents = os.listdir('/proc/self/task')
except OSError:
LOG.debug('cannot fetch thread IDs for current process')
return [os.getpid()]
return [int(s) for s in ents if s.isdigit()]
def _set_cpu_mask(self, mask):
s = self._mask_to_bytes(mask)
for tid in self._get_thread_ids():
_sched_setaffinity(tid, len(s), s)
if _sched_setaffinity is not None:
policy = LinuxPolicy()
else:
policy = Policy()
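# Illustrative use of the module-level singleton (a hedged sketch; the exact
# call sites live elsewhere in ansible_mitogen): each new process calls one
# policy method early in its life, e.g.
#   affinity.policy.assign_muxprocess(0)  # in the connection multiplexer
#   affinity.policy.assign_worker()       # in a WorkerProcess
#   affinity.policy.assign_subprocess()   # in helper children such as SSH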
r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
interchange format.
:mod:`simplejson` exposes an API familiar to users of the standard library
:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
version of the :mod:`json` library contained in Python 2.6, but maintains
compatibility with Python 2.4 and Python 2.5 and (currently) has
significant performance advantages, even without using the optional C
extension for speedups.
Encoding basic Python object hierarchies::
>>> import simplejson as json
>>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
'["foo", {"bar": ["baz", null, 1.0, 2]}]'
>>> print json.dumps("\"foo\bar")
"\"foo\bar"
>>> print json.dumps(u'\u1234')
"\u1234"
>>> print json.dumps('\\')
"\\"
>>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
{"a": 0, "b": 0, "c": 0}
>>> from StringIO import StringIO
>>> io = StringIO()
>>> json.dump(['streaming API'], io)
>>> io.getvalue()
'["streaming API"]'
Compact encoding::
>>> import simplejson as json
>>> json.dumps([1,2,3,{'4': 5, '6': 7}], separators=(',',':'))
'[1,2,3,{"4":5,"6":7}]'
Pretty printing::
>>> import simplejson as json
>>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)
>>> print '\n'.join([l.rstrip() for l in s.splitlines()])
{
"4": 5,
"6": 7
}
Decoding JSON::
>>> import simplejson as json
>>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
>>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
True
>>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
True
>>> from StringIO import StringIO
>>> io = StringIO('["streaming API"]')
>>> json.load(io)[0] == 'streaming API'
True
Specializing JSON object decoding::
>>> import simplejson as json
>>> def as_complex(dct):
... if '__complex__' in dct:
... return complex(dct['real'], dct['imag'])
... return dct
...
>>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
... object_hook=as_complex)
(1+2j)
>>> import decimal
>>> json.loads('1.1', parse_float=decimal.Decimal) == decimal.Decimal('1.1')
True
Specializing JSON object encoding::
>>> import simplejson as json
>>> def encode_complex(obj):
... if isinstance(obj, complex):
... return [obj.real, obj.imag]
... raise TypeError(repr(obj) + " is not JSON serializable")
...
>>> json.dumps(2 + 1j, default=encode_complex)
'[2.0, 1.0]'
>>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
'[2.0, 1.0]'
>>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
'[2.0, 1.0]'
Using simplejson.tool from the shell to validate and pretty-print::
$ echo '{"json":"obj"}' | python -m simplejson.tool
{
"json": "obj"
}
$ echo '{ 1.2:3.4}' | python -m simplejson.tool
Expecting property name: line 1 column 2 (char 2)
"""
__version__ = '2.0.9'
__all__ = [
'dump', 'dumps', 'load', 'loads',
'JSONDecoder', 'JSONEncoder',
]
__author__ = 'Bob Ippolito <bob@redivi.com>'
from decoder import JSONDecoder
from encoder import JSONEncoder
_default_encoder = JSONEncoder(
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding='utf-8',
default=None,
)
def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, **kw):
"""Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
``.write()``-supporting file-like object).
If ``skipkeys`` is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
If ``ensure_ascii`` is false, then the some chunks written to ``fp``
may be ``unicode`` instances, subject to normal Python ``str`` to
``unicode`` coercion rules. Unless ``fp.write()`` explicitly
understands ``unicode`` (as in ``codecs.getwriter()``) this is likely
to cause an error.
If ``check_circular`` is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If ``allow_nan`` is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
in strict compliance of the JSON specification, instead of using the
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
If ``indent`` is a non-negative integer, then JSON array elements and object
members will be pretty-printed with that indent level. An indent level
of 0 will only insert newlines. ``None`` is the most compact representation.
If ``separators`` is an ``(item_separator, dict_separator)`` tuple
then it will be used instead of the default ``(', ', ': ')`` separators.
``(',', ':')`` is the most compact JSON representation.
``encoding`` is the character encoding for str instances, default is UTF-8.
``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and not kw):
iterable = _default_encoder.iterencode(obj)
else:
if cls is None:
cls = JSONEncoder
iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding,
default=default, **kw).iterencode(obj)
# could accelerate with writelines in some versions of Python, at
# a debuggability cost
for chunk in iterable:
fp.write(chunk)
def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, **kw):
"""Serialize ``obj`` to a JSON formatted ``str``.
If ``skipkeys`` is true then ``dict`` keys that are not basic types
(``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
will be skipped instead of raising a ``TypeError``.
If ``ensure_ascii`` is false, then the return value will be a
``unicode`` instance subject to normal Python ``str`` to ``unicode``
coercion rules instead of being escaped to an ASCII ``str``.
If ``check_circular`` is false, then the circular reference check
for container types will be skipped and a circular reference will
result in an ``OverflowError`` (or worse).
If ``allow_nan`` is false, then it will be a ``ValueError`` to
serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
strict compliance of the JSON specification, instead of using the
JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).
If ``indent`` is a non-negative integer, then JSON array elements and
object members will be pretty-printed with that indent level. An indent
level of 0 will only insert newlines. ``None`` is the most compact
representation.
If ``separators`` is an ``(item_separator, dict_separator)`` tuple
then it will be used instead of the default ``(', ', ': ')`` separators.
``(',', ':')`` is the most compact JSON representation.
``encoding`` is the character encoding for str instances, default is UTF-8.
``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.
To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
``.default()`` method to serialize additional types), specify it with
the ``cls`` kwarg.
"""
# cached encoder
if (not skipkeys and ensure_ascii and
check_circular and allow_nan and
cls is None and indent is None and separators is None and
encoding == 'utf-8' and default is None and not kw):
return _default_encoder.encode(obj)
if cls is None:
cls = JSONEncoder
return cls(
skipkeys=skipkeys, ensure_ascii=ensure_ascii,
check_circular=check_circular, allow_nan=allow_nan, indent=indent,
separators=separators, encoding=encoding, default=default,
**kw).encode(obj)
_default_decoder = JSONDecoder(encoding=None, object_hook=None)
def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, **kw):
"""Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
a JSON document) to a Python object.
If the contents of ``fp`` are encoded with an ASCII-based encoding other
than utf-8 (e.g. latin-1), then an appropriate ``encoding`` name must
be specified. Encodings that are not ASCII based (such as UCS-2) are
not allowed; such files should be wrapped with
``codecs.getreader(encoding)(fp)``, or simply decoded to a ``unicode``
object and passed to ``loads()``.
``object_hook`` is an optional function that will be called with the
result of any object literal decode (a ``dict``). The return value of
``object_hook`` will be used instead of the ``dict``. This feature
can be used to implement custom decoders (e.g. JSON-RPC class hinting).
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg.
"""
return loads(fp.read(),
encoding=encoding, cls=cls, object_hook=object_hook,
parse_float=parse_float, parse_int=parse_int,
parse_constant=parse_constant, **kw)
def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, **kw):
"""Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
document) to a Python object.
If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
must be specified. Encodings that are not ASCII based (such as UCS-2)
are not allowed and should be decoded to ``unicode`` first.
``object_hook`` is an optional function that will be called with the
result of any object literal decode (a ``dict``). The return value of
``object_hook`` will be used instead of the ``dict``. This feature
can be used to implement custom decoders (e.g. JSON-RPC class hinting).
``parse_float``, if specified, will be called with the string
of every JSON float to be decoded. By default this is equivalent to
float(num_str). This can be used to use another datatype or parser
for JSON floats (e.g. decimal.Decimal).
``parse_int``, if specified, will be called with the string
of every JSON int to be decoded. By default this is equivalent to
int(num_str). This can be used to use another datatype or parser
for JSON integers (e.g. float).
``parse_constant``, if specified, will be called with one of the
following strings: -Infinity, Infinity, NaN.
This can be used to raise an exception if invalid JSON numbers
are encountered.
To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
kwarg.
"""
if (cls is None and encoding is None and object_hook is None and
parse_int is None and parse_float is None and
parse_constant is None and not kw):
return _default_decoder.decode(s)
if cls is None:
cls = JSONDecoder
if object_hook is not None:
kw['object_hook'] = object_hook
if parse_float is not None:
kw['parse_float'] = parse_float
if parse_int is not None:
kw['parse_int'] = parse_int
if parse_constant is not None:
kw['parse_constant'] = parse_constant
return cls(encoding=encoding, **kw).decode(s)
"""Implementation of JSONDecoder
"""
import re
import sys
import struct
from simplejson.scanner import make_scanner
try:
from simplejson._speedups import scanstring as c_scanstring
except ImportError:
c_scanstring = None
__all__ = ['JSONDecoder']
FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL
def _floatconstants():
_BYTES = '7FF80000000000007FF0000000000000'.decode('hex')
if sys.byteorder != 'big':
_BYTES = _BYTES[:8][::-1] + _BYTES[8:][::-1]
nan, inf = struct.unpack('dd', _BYTES)
return nan, inf, -inf
NaN, PosInf, NegInf = _floatconstants()
def linecol(doc, pos):
lineno = doc.count('\n', 0, pos) + 1
if lineno == 1:
colno = pos
else:
colno = pos - doc.rindex('\n', 0, pos)
return lineno, colno
def errmsg(msg, doc, pos, end=None):
# Note that this function is called from _speedups
lineno, colno = linecol(doc, pos)
if end is None:
#fmt = '{0}: line {1} column {2} (char {3})'
#return fmt.format(msg, lineno, colno, pos)
fmt = '%s: line %d column %d (char %d)'
return fmt % (msg, lineno, colno, pos)
endlineno, endcolno = linecol(doc, end)
#fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
#return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
fmt = '%s: line %d column %d - line %d column %d (char %d - %d)'
return fmt % (msg, lineno, colno, endlineno, endcolno, pos, end)
_CONSTANTS = {
'-Infinity': NegInf,
'Infinity': PosInf,
'NaN': NaN,
}
STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
'"': u'"', '\\': u'\\', '/': u'/',
'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}
DEFAULT_ENCODING = "utf-8"
def py_scanstring(s, end, encoding=None, strict=True, _b=BACKSLASH, _m=STRINGCHUNK.match):
"""Scan the string s for a JSON string. End is the index of the
character in s after the quote that started the JSON string.
Unescapes all valid JSON string escape sequences and raises ValueError
on attempt to decode an invalid string. If strict is False then literal
control characters are allowed in the string.
Returns a tuple of the decoded string and the index of the character in s
after the end quote."""
if encoding is None:
encoding = DEFAULT_ENCODING
chunks = []
_append = chunks.append
begin = end - 1
while 1:
chunk = _m(s, end)
if chunk is None:
raise ValueError(
errmsg("Unterminated string starting at", s, begin))
end = chunk.end()
content, terminator = chunk.groups()
# Content contains zero or more unescaped string characters
if content:
if not isinstance(content, unicode):
content = unicode(content, encoding)
_append(content)
# Terminator is the end of string, a literal control character,
# or a backslash denoting that an escape sequence follows
if terminator == '"':
break
elif terminator != '\\':
if strict:
msg = "Invalid control character %r at" % (terminator,)
#msg = "Invalid control character {0!r} at".format(terminator)
raise ValueError(errmsg(msg, s, end))
else:
_append(terminator)
continue
try:
esc = s[end]
except IndexError:
raise ValueError(
errmsg("Unterminated string starting at", s, begin))
# If not a unicode escape sequence, must be in the lookup table
if esc != 'u':
try:
char = _b[esc]
except KeyError:
msg = "Invalid \\escape: " + repr(esc)
raise ValueError(errmsg(msg, s, end))
end += 1
else:
# Unicode escape sequence
esc = s[end + 1:end + 5]
next_end = end + 5
if len(esc) != 4:
msg = "Invalid \\uXXXX escape"
raise ValueError(errmsg(msg, s, end))
uni = int(esc, 16)
# Check for surrogate pair on UCS-4 systems
if 0xd800 <= uni <= 0xdbff and sys.maxunicode > 65535:
msg = "Invalid \\uXXXX\\uXXXX surrogate pair"
if not s[end + 5:end + 7] == '\\u':
raise ValueError(errmsg(msg, s, end))
esc2 = s[end + 7:end + 11]
if len(esc2) != 4:
raise ValueError(errmsg(msg, s, end))
uni2 = int(esc2, 16)
uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
next_end += 6
char = unichr(uni)
end = next_end
# Append the unescaped character
_append(char)
return u''.join(chunks), end
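# Illustrative example (not in the original source): for s = '"abc"' the
# opening quote sits at index 0, so scanning starts at end=1:
#   py_scanstring('"abc"', 1)  ->  (u'abc', 5)
# i.e. the decoded text and the index one past the closing quote.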
# Use speedup if available
scanstring = c_scanstring or py_scanstring
WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'
def JSONObject((s, end), encoding, strict, scan_once, object_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
pairs = {}
# Use a slice to prevent IndexError from being raised, the following
# check will raise a more specific ValueError if the string is empty
nextchar = s[end:end + 1]
# Normally we expect nextchar == '"'
if nextchar != '"':
if nextchar in _ws:
end = _w(s, end).end()
nextchar = s[end:end + 1]
# Trivial empty object
if nextchar == '}':
return pairs, end + 1
elif nextchar != '"':
raise ValueError(errmsg("Expecting property name", s, end))
end += 1
while True:
key, end = scanstring(s, end, encoding, strict)
# To skip some function call overhead we optimize the fast paths where
# the JSON key separator is ": " or just ":".
if s[end:end + 1] != ':':
end = _w(s, end).end()
if s[end:end + 1] != ':':
raise ValueError(errmsg("Expecting : delimiter", s, end))
end += 1
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
try:
value, end = scan_once(s, end)
except StopIteration:
raise ValueError(errmsg("Expecting object", s, end))
pairs[key] = value
try:
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar == '}':
break
elif nextchar != ',':
raise ValueError(errmsg("Expecting , delimiter", s, end - 1))
try:
nextchar = s[end]
if nextchar in _ws:
end += 1
nextchar = s[end]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end]
except IndexError:
nextchar = ''
end += 1
if nextchar != '"':
raise ValueError(errmsg("Expecting property name", s, end - 1))
if object_hook is not None:
pairs = object_hook(pairs)
return pairs, end
def JSONArray((s, end), scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
values = []
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
# Look-ahead for trivial empty array
if nextchar == ']':
return values, end + 1
_append = values.append
while True:
try:
value, end = scan_once(s, end)
except StopIteration:
raise ValueError(errmsg("Expecting object", s, end))
_append(value)
nextchar = s[end:end + 1]
if nextchar in _ws:
end = _w(s, end + 1).end()
nextchar = s[end:end + 1]
end += 1
if nextchar == ']':
break
elif nextchar != ',':
raise ValueError(errmsg("Expecting , delimiter", s, end))
try:
if s[end] in _ws:
end += 1
if s[end] in _ws:
end = _w(s, end + 1).end()
except IndexError:
pass
return values, end
class JSONDecoder(object):
"""Simple JSON <http://json.org> decoder
Performs the following translations in decoding by default:
+---------------+-------------------+
| JSON | Python |
+===============+===================+
| object | dict |
+---------------+-------------------+
| array | list |
+---------------+-------------------+
| string | unicode |
+---------------+-------------------+
| number (int) | int, long |
+---------------+-------------------+
| number (real) | float |
+---------------+-------------------+
| true | True |
+---------------+-------------------+
| false | False |
+---------------+-------------------+
| null | None |
+---------------+-------------------+
It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
their corresponding ``float`` values, which is outside the JSON spec.
"""
def __init__(self, encoding=None, object_hook=None, parse_float=None,
parse_int=None, parse_constant=None, strict=True):
"""``encoding`` determines the encoding used to interpret any ``str``
objects decoded by this instance (utf-8 by default). It has no
effect when decoding ``unicode`` objects.
Note that currently only encodings that are a superset of ASCII work,
strings of other encodings should be passed in as ``unicode``.
``object_hook``, if specified, will be called with the result
of every JSON object decoded and its return value will be used in
place of the given ``dict``. This can be used to provide custom
deserializations (e.g. to support JSON-RPC class hinting).
``parse_float``, if specified, will be called with the string
of every JSON float to be decoded. By default this is equivalent to
float(num_str). This can be used to use another datatype or parser
for JSON floats (e.g. decimal.Decimal).
``parse_int``, if specified, will be called with the string
of every JSON int to be decoded. By default this is equivalent to
int(num_str). This can be used to use another datatype or parser
for JSON integers (e.g. float).
``parse_constant``, if specified, will be called with one of the
following strings: -Infinity, Infinity, NaN.
This can be used to raise an exception if invalid JSON numbers
are encountered.
"""
self.encoding = encoding
self.object_hook = object_hook
self.parse_float = parse_float or float
self.parse_int = parse_int or int
self.parse_constant = parse_constant or _CONSTANTS.__getitem__
self.strict = strict
self.parse_object = JSONObject
self.parse_array = JSONArray
self.parse_string = scanstring
self.scan_once = make_scanner(self)
def decode(self, s, _w=WHITESPACE.match):
"""Return the Python representation of ``s`` (a ``str`` or ``unicode``
instance containing a JSON document)
"""
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
end = _w(s, end).end()
if end != len(s):
raise ValueError(errmsg("Extra data", s, end, len(s)))
return obj
def raw_decode(self, s, idx=0):
"""Decode a JSON document from ``s`` (a ``str`` or ``unicode`` beginning
with a JSON document) and return a 2-tuple of the Python
representation and the index in ``s`` where the document ended.
This can be used to decode a JSON document from a string that may
have extraneous data at the end.
"""
try:
obj, end = self.scan_once(s, idx)
except StopIteration:
raise ValueError("No JSON object could be decoded")
return obj, end
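# Illustrative example (not in the original source): raw_decode() tolerates
# trailing data and reports where the document ended:
#   JSONDecoder().raw_decode('{"a": 1} trailing')  ->  ({u'a': 1}, 8)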
"""Implementation of JSONEncoder
"""
import re
try:
from simplejson._speedups import encode_basestring_ascii as c_encode_basestring_ascii
except ImportError:
c_encode_basestring_ascii = None
try:
from simplejson._speedups import make_encoder as c_make_encoder
except ImportError:
c_make_encoder = None
ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
HAS_UTF8 = re.compile(r'[\x80-\xff]')
ESCAPE_DCT = {
'\\': '\\\\',
'"': '\\"',
'\b': '\\b',
'\f': '\\f',
'\n': '\\n',
'\r': '\\r',
'\t': '\\t',
}
for i in range(0x20):
#ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))
# Assume this produces an infinity on all machines (probably not guaranteed)
INFINITY = float('1e66666')
FLOAT_REPR = repr
def encode_basestring(s):
"""Return a JSON representation of a Python string
"""
def replace(match):
return ESCAPE_DCT[match.group(0)]
return '"' + ESCAPE.sub(replace, s) + '"'
def py_encode_basestring_ascii(s):
"""Return an ASCII-only JSON representation of a Python string
"""
if isinstance(s, str) and HAS_UTF8.search(s) is not None:
s = s.decode('utf-8')
def replace(match):
s = match.group(0)
try:
return ESCAPE_DCT[s]
except KeyError:
n = ord(s)
if n < 0x10000:
#return '\\u{0:04x}'.format(n)
return '\\u%04x' % (n,)
else:
# surrogate pair
n -= 0x10000
s1 = 0xd800 | ((n >> 10) & 0x3ff)
s2 = 0xdc00 | (n & 0x3ff)
#return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
return '\\u%04x\\u%04x' % (s1, s2)
return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'
encode_basestring_ascii = c_encode_basestring_ascii or py_encode_basestring_ascii
class JSONEncoder(object):
"""Extensible JSON <http://json.org> encoder for Python data structures.
Supports the following objects and types by default:
+-------------------+---------------+
| Python | JSON |
+===================+===============+
| dict | object |
+-------------------+---------------+
| list, tuple | array |
+-------------------+---------------+
| str, unicode | string |
+-------------------+---------------+
| int, long, float | number |
+-------------------+---------------+
| True | true |
+-------------------+---------------+
| False | false |
+-------------------+---------------+
| None | null |
+-------------------+---------------+
To extend this to recognize other objects, subclass and implement a
``.default()`` method that returns a serializable object for ``o`` if
possible, otherwise it should call the superclass implementation (to
raise ``TypeError``).
"""
item_separator = ', '
key_separator = ': '
def __init__(self, skipkeys=False, ensure_ascii=True,
check_circular=True, allow_nan=True, sort_keys=False,
indent=None, separators=None, encoding='utf-8', default=None):
"""Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt
encoding of keys that are not str, int, long, float or None. If
skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str
objects with all incoming unicode characters escaped. If
ensure_ascii is false, the output will be a unicode object.
If check_circular is true, then lists, dicts, and custom encoded
objects will be checked for circular references during encoding to
prevent an infinite recursion (which would cause an OverflowError).
Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be
encoded as such. This behavior is not JSON specification compliant,
but is consistent with most JavaScript based encoders and decoders.
Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be
sorted by key; this is useful for regression tests to ensure
that JSON serializations can be compared on a day-to-day basis.
If indent is a non-negative integer, then JSON array
elements and object members will be pretty-printed with that
indent level. An indent level of 0 will only insert newlines.
None is the most compact representation.
If specified, separators should be an (item_separator, key_separator)
tuple. The default is (', ', ': '). To get the most compact JSON
representation you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects
that can't otherwise be serialized. It should return a JSON encodable
version of the object or raise a ``TypeError``.
If encoding is not None, then all input strings will be
transformed into unicode using that encoding prior to JSON-encoding.
The default is UTF-8.
"""
self.skipkeys = skipkeys
self.ensure_ascii = ensure_ascii
self.check_circular = check_circular
self.allow_nan = allow_nan
self.sort_keys = sort_keys
self.indent = indent
if separators is not None:
self.item_separator, self.key_separator = separators
if default is not None:
self.default = default
self.encoding = encoding
def default(self, o):
"""Implement this method in a subclass such that it returns
a serializable object for ``o``, or calls the base implementation
(to raise a ``TypeError``).
For example, to support arbitrary iterators, you could
implement default like this::
def default(self, o):
try:
iterable = iter(o)
except TypeError:
pass
else:
return list(iterable)
return JSONEncoder.default(self, o)
"""
raise TypeError(repr(o) + " is not JSON serializable")
def encode(self, o):
"""Return a JSON string representation of a Python data structure.
>>> JSONEncoder().encode({"foo": ["bar", "baz"]})
'{"foo": ["bar", "baz"]}'
"""
# This is for extremely simple cases and benchmarks.
if isinstance(o, basestring):
if isinstance(o, str):
_encoding = self.encoding
if (_encoding is not None
and not (_encoding == 'utf-8')):
o = o.decode(_encoding)
if self.ensure_ascii:
return encode_basestring_ascii(o)
else:
return encode_basestring(o)
# This doesn't pass the iterator directly to ''.join() because the
# exceptions aren't as detailed. The list call should be roughly
# equivalent to the PySequence_Fast that ''.join() would do.
chunks = self.iterencode(o, _one_shot=True)
if not isinstance(chunks, (list, tuple)):
chunks = list(chunks)
return ''.join(chunks)
def iterencode(self, o, _one_shot=False):
"""Encode the given object and yield each string
representation as available.
For example::
for chunk in JSONEncoder().iterencode(bigobject):
mysocket.write(chunk)
"""
if self.check_circular:
markers = {}
else:
markers = None
if self.ensure_ascii:
_encoder = encode_basestring_ascii
else:
_encoder = encode_basestring
if self.encoding != 'utf-8':
def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
if isinstance(o, str):
o = o.decode(_encoding)
return _orig_encoder(o)
def floatstr(o, allow_nan=self.allow_nan, _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY):
# Check for specials. Note that this type of test is processor- and/or
# platform-specific, so do tests which don't depend on the internals.
if o != o:
text = 'NaN'
elif o == _inf:
text = 'Infinity'
elif o == _neginf:
text = '-Infinity'
else:
return _repr(o)
if not allow_nan:
raise ValueError(
"Out of range float values are not JSON compliant: " +
repr(o))
return text
if _one_shot and c_make_encoder is not None and not self.indent and not self.sort_keys:
_iterencode = c_make_encoder(
markers, self.default, _encoder, self.indent,
self.key_separator, self.item_separator, self.sort_keys,
self.skipkeys, self.allow_nan)
else:
_iterencode = _make_iterencode(
markers, self.default, _encoder, self.indent, floatstr,
self.key_separator, self.item_separator, self.sort_keys,
self.skipkeys, _one_shot)
return _iterencode(o, 0)
def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot,
## HACK: hand-optimized bytecode; turn globals into locals
False=False,
True=True,
ValueError=ValueError,
basestring=basestring,
dict=dict,
float=float,
id=id,
int=int,
isinstance=isinstance,
list=list,
long=long,
str=str,
tuple=tuple,
):
def _iterencode_list(lst, _current_indent_level):
if not lst:
yield '[]'
return
if markers is not None:
markerid = id(lst)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = lst
buf = '['
if _indent is not None:
_current_indent_level += 1
newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
separator = _item_separator + newline_indent
buf += newline_indent
else:
newline_indent = None
separator = _item_separator
first = True
for value in lst:
if first:
first = False
else:
buf = separator
if isinstance(value, basestring):
yield buf + _encoder(value)
elif value is None:
yield buf + 'null'
elif value is True:
yield buf + 'true'
elif value is False:
yield buf + 'false'
elif isinstance(value, (int, long)):
yield buf + str(value)
elif isinstance(value, float):
yield buf + _floatstr(value)
else:
yield buf
if isinstance(value, (list, tuple)):
chunks = _iterencode_list(value, _current_indent_level)
elif isinstance(value, dict):
chunks = _iterencode_dict(value, _current_indent_level)
else:
chunks = _iterencode(value, _current_indent_level)
for chunk in chunks:
yield chunk
if newline_indent is not None:
_current_indent_level -= 1
yield '\n' + (' ' * (_indent * _current_indent_level))
yield ']'
if markers is not None:
del markers[markerid]
def _iterencode_dict(dct, _current_indent_level):
if not dct:
yield '{}'
return
if markers is not None:
markerid = id(dct)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = dct
yield '{'
if _indent is not None:
_current_indent_level += 1
newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
item_separator = _item_separator + newline_indent
yield newline_indent
else:
newline_indent = None
item_separator = _item_separator
first = True
if _sort_keys:
items = dct.items()
items.sort(key=lambda kv: kv[0])
else:
items = dct.iteritems()
for key, value in items:
if isinstance(key, basestring):
pass
# JavaScript is weakly typed for these, so it makes sense to
# also allow them. Many encoders seem to do something like this.
elif isinstance(key, float):
key = _floatstr(key)
elif key is True:
key = 'true'
elif key is False:
key = 'false'
elif key is None:
key = 'null'
elif isinstance(key, (int, long)):
key = str(key)
elif _skipkeys:
continue
else:
raise TypeError("key " + repr(key) + " is not a string")
if first:
first = False
else:
yield item_separator
yield _encoder(key)
yield _key_separator
if isinstance(value, basestring):
yield _encoder(value)
elif value is None:
yield 'null'
elif value is True:
yield 'true'
elif value is False:
yield 'false'
elif isinstance(value, (int, long)):
yield str(value)
elif isinstance(value, float):
yield _floatstr(value)
else:
if isinstance(value, (list, tuple)):
chunks = _iterencode_list(value, _current_indent_level)
elif isinstance(value, dict):
chunks = _iterencode_dict(value, _current_indent_level)
else:
chunks = _iterencode(value, _current_indent_level)
for chunk in chunks:
yield chunk
if newline_indent is not None:
_current_indent_level -= 1
yield '\n' + (' ' * (_indent * _current_indent_level))
yield '}'
if markers is not None:
del markers[markerid]
def _iterencode(o, _current_indent_level):
if isinstance(o, basestring):
yield _encoder(o)
elif o is None:
yield 'null'
elif o is True:
yield 'true'
elif o is False:
yield 'false'
elif isinstance(o, (int, long)):
yield str(o)
elif isinstance(o, float):
yield _floatstr(o)
elif isinstance(o, (list, tuple)):
for chunk in _iterencode_list(o, _current_indent_level):
yield chunk
elif isinstance(o, dict):
for chunk in _iterencode_dict(o, _current_indent_level):
yield chunk
else:
if markers is not None:
markerid = id(o)
if markerid in markers:
raise ValueError("Circular reference detected")
markers[markerid] = o
o = _default(o)
for chunk in _iterencode(o, _current_indent_level):
yield chunk
if markers is not None:
del markers[markerid]
return _iterencode
"""JSON token scanner
"""
import re
try:
from simplejson._speedups import make_scanner as c_make_scanner
except ImportError:
c_make_scanner = None
__all__ = ['make_scanner']
NUMBER_RE = re.compile(
r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
(re.VERBOSE | re.MULTILINE | re.DOTALL))
def py_make_scanner(context):
parse_object = context.parse_object
parse_array = context.parse_array
parse_string = context.parse_string
match_number = NUMBER_RE.match
encoding = context.encoding
strict = context.strict
parse_float = context.parse_float
parse_int = context.parse_int
parse_constant = context.parse_constant
object_hook = context.object_hook
def _scan_once(string, idx):
try:
nextchar = string[idx]
except IndexError:
raise StopIteration
if nextchar == '"':
return parse_string(string, idx + 1, encoding, strict)
elif nextchar == '{':
return parse_object((string, idx + 1), encoding, strict, _scan_once, object_hook)
elif nextchar == '[':
return parse_array((string, idx + 1), _scan_once)
elif nextchar == 'n' and string[idx:idx + 4] == 'null':
return None, idx + 4
elif nextchar == 't' and string[idx:idx + 4] == 'true':
return True, idx + 4
elif nextchar == 'f' and string[idx:idx + 5] == 'false':
return False, idx + 5
m = match_number(string, idx)
if m is not None:
integer, frac, exp = m.groups()
if frac or exp:
res = parse_float(integer + (frac or '') + (exp or ''))
else:
res = parse_int(integer)
return res, m.end()
elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
return parse_constant('NaN'), idx + 3
elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
return parse_constant('Infinity'), idx + 8
elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
return parse_constant('-Infinity'), idx + 9
else:
raise StopIteration
return _scan_once
make_scanner = c_make_scanner or py_make_scanner
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import
from __future__ import unicode_literals
import errno
import logging
import os
import pprint
import stat
import sys
import time
import ansible.constants as C
import ansible.errors
import ansible.plugins.connection
import ansible.utils.shlex
import mitogen.core
import mitogen.fork
import mitogen.utils
import ansible_mitogen.mixins
import ansible_mitogen.parsing
import ansible_mitogen.process
import ansible_mitogen.services
import ansible_mitogen.target
import ansible_mitogen.transport_config
LOG = logging.getLogger(__name__)
task_vars_msg = (
'could not recover task_vars. This means some connection '
'settings may erroneously be reset to their defaults. '
'Please report a bug if you encounter this message.'
)
def get_remote_name(spec):
"""
Return the value to use for the "remote_name" parameter.
"""
if spec.mitogen_mask_remote_name():
return 'ansible'
return None
def optional_int(value):
"""
Convert `value` to an integer, returning :data:`None` if `value` is
:data:`None` or cannot be converted.
"""
try:
return int(value)
except (TypeError, ValueError):
return None
def convert_bool(obj):
if isinstance(obj, bool):
return obj
if str(obj).lower() in ('no', 'false', '0'):
return False
if str(obj).lower() not in ('yes', 'true', '1'):
raise ansible.errors.AnsibleConnectionFailure(
'expected yes/no/true/false/0/1, got %r' % (obj,)
)
return True
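# Illustrative examples (not in the original source):
#   convert_bool(True)    -> True
#   convert_bool('No')    -> False
#   convert_bool('1')     -> True
#   convert_bool('maybe') -> raises AnsibleConnectionFailure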
def default(value, default):
"""
Return `default` if `value` is :data:`None`, otherwise return `value`.
"""
if value is None:
return default
return value
def _connect_local(spec):
"""
Return ContextService arguments for a local connection.
"""
return {
'method': 'local',
'kwargs': {
'python_path': spec.python_path(),
}
}
def _connect_ssh(spec):
"""
Return ContextService arguments for an SSH connection.
"""
if C.HOST_KEY_CHECKING:
check_host_keys = 'enforce'
else:
check_host_keys = 'ignore'
# #334: tilde-expand private_key_file to avoid implementation difference
# between Python and OpenSSH.
private_key_file = spec.private_key_file()
if private_key_file is not None:
private_key_file = os.path.expanduser(private_key_file)
return {
'method': 'ssh',
'kwargs': {
'check_host_keys': check_host_keys,
'hostname': spec.remote_addr(),
'username': spec.remote_user(),
'compression': convert_bool(
default(spec.mitogen_ssh_compression(), True)
),
'password': spec.password(),
'port': spec.port(),
'python_path': spec.python_path(),
'identity_file': private_key_file,
'identities_only': False,
'ssh_path': spec.ssh_executable(),
'connect_timeout': spec.ansible_ssh_timeout(),
'ssh_args': spec.ssh_args(),
'ssh_debug_level': spec.mitogen_ssh_debug_level(),
'remote_name': get_remote_name(spec),
'keepalive_count': (
spec.mitogen_ssh_keepalive_count() or 10
),
'keepalive_interval': (
spec.mitogen_ssh_keepalive_interval() or 30
),
}
}
def _connect_buildah(spec):
"""
Return ContextService arguments for a Buildah connection.
"""
return {
'method': 'buildah',
'kwargs': {
'username': spec.remote_user(),
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_docker(spec):
"""
Return ContextService arguments for a Docker connection.
"""
return {
'method': 'docker',
'kwargs': {
'username': spec.remote_user(),
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_kubectl(spec):
"""
Return ContextService arguments for a Kubernetes connection.
"""
return {
'method': 'kubectl',
'kwargs': {
'pod': spec.remote_addr(),
'python_path': spec.python_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'kubectl_path': spec.mitogen_kubectl_path(),
'kubectl_args': spec.extra_args(),
'remote_name': get_remote_name(spec),
}
}
def _connect_jail(spec):
"""
Return ContextService arguments for a FreeBSD jail connection.
"""
return {
'method': 'jail',
'kwargs': {
'username': spec.remote_user(),
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_lxc(spec):
"""
Return ContextService arguments for an LXC Classic container connection.
"""
return {
'method': 'lxc',
'kwargs': {
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'lxc_attach_path': spec.mitogen_lxc_attach_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_lxd(spec):
"""
Return ContextService arguments for an LXD container connection.
"""
return {
'method': 'lxd',
'kwargs': {
'container': spec.remote_addr(),
'python_path': spec.python_path(),
'lxc_path': spec.mitogen_lxc_path(),
'connect_timeout': spec.ansible_ssh_timeout() or spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_machinectl(spec):
"""
Return ContextService arguments for a machinectl connection.
"""
return _connect_setns(spec, kind='machinectl')
def _connect_setns(spec, kind=None):
"""
Return ContextService arguments for a mitogen_setns connection.
"""
return {
'method': 'setns',
'kwargs': {
'container': spec.remote_addr(),
'username': spec.remote_user(),
'python_path': spec.python_path(),
'kind': kind or spec.mitogen_kind(),
'docker_path': spec.mitogen_docker_path(),
'lxc_path': spec.mitogen_lxc_path(),
'lxc_info_path': spec.mitogen_lxc_info_path(),
'machinectl_path': spec.mitogen_machinectl_path(),
}
}
def _connect_su(spec):
"""
Return ContextService arguments for su as a become method.
"""
return {
'method': 'su',
'enable_lru': True,
'kwargs': {
'username': spec.become_user(),
'password': spec.become_pass(),
'python_path': spec.python_path(),
'su_path': spec.become_exe(),
'connect_timeout': spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_sudo(spec):
"""
Return ContextService arguments for sudo as a become method.
"""
return {
'method': 'sudo',
'enable_lru': True,
'kwargs': {
'username': spec.become_user(),
'password': spec.become_pass(),
'python_path': spec.python_path(),
'sudo_path': spec.become_exe(),
'connect_timeout': spec.timeout(),
'sudo_args': spec.sudo_args(),
'remote_name': get_remote_name(spec),
}
}
def _connect_doas(spec):
"""
Return ContextService arguments for doas as a become method.
"""
return {
'method': 'doas',
'enable_lru': True,
'kwargs': {
'username': spec.become_user(),
'password': spec.become_pass(),
'python_path': spec.python_path(),
'doas_path': spec.become_exe(),
'connect_timeout': spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_mitogen_su(spec):
"""
Return ContextService arguments for su as a first class connection.
"""
return {
'method': 'su',
'kwargs': {
'username': spec.remote_user(),
'password': spec.password(),
'python_path': spec.python_path(),
'su_path': spec.become_exe(),
'connect_timeout': spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
def _connect_mitogen_sudo(spec):
"""
Return ContextService arguments for sudo as a first class connection.
"""
return {
'method': 'sudo',
'kwargs': {
'username': spec.remote_user(),
'password': spec.password(),
'python_path': spec.python_path(),
'sudo_path': spec.become_exe(),
'connect_timeout': spec.timeout(),
'sudo_args': spec.sudo_args(),
'remote_name': get_remote_name(spec),
}
}
def _connect_mitogen_doas(spec):
"""
Return ContextService arguments for doas as a first class connection.
"""
return {
'method': 'doas',
'kwargs': {
'username': spec.remote_user(),
'password': spec.password(),
'python_path': spec.python_path(),
'doas_path': spec.ansible_doas_exe(),
'connect_timeout': spec.timeout(),
'remote_name': get_remote_name(spec),
}
}
#: Mapping of connection method names to functions invoked as `func(spec)`
#: generating ContextService keyword arguments matching a connection
#: specification.
CONNECTION_METHOD = {
'buildah': _connect_buildah,
'docker': _connect_docker,
'kubectl': _connect_kubectl,
'jail': _connect_jail,
'local': _connect_local,
'lxc': _connect_lxc,
'lxd': _connect_lxd,
'machinectl': _connect_machinectl,
'setns': _connect_setns,
'ssh': _connect_ssh,
'smart': _connect_ssh, # issue #548.
'su': _connect_su,
'sudo': _connect_sudo,
'doas': _connect_doas,
'mitogen_su': _connect_mitogen_su,
'mitogen_sudo': _connect_mitogen_sudo,
'mitogen_doas': _connect_mitogen_doas,
}
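# Illustration (added for exposition; not part of the original source):
# _stack_from_spec() below dispatches through this table once per hop, so a
# host reached over SSH with become: true and become_method: sudo yields a
# two-element stack, roughly:
#
#   stack = (
#       CONNECTION_METHOD['ssh'](spec),   # physical connection
#       CONNECTION_METHOD['sudo'](spec),  # privilege escalation hop
#   )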
class CallChain(mitogen.parent.CallChain):
"""
Extend :class:`mitogen.parent.CallChain` to additionally cause the
associated :class:`Connection` to be reset if a ChannelError occurs.
This only catches failures that occur while a call is pending; it is a
stop-gap until a more general method is available to notice connection
failure in every situation.
"""
call_aborted_msg = (
'Mitogen was disconnected from the remote environment while a call '
'was in-progress. If you feel this is in error, please file a bug. '
'Original error was: %s'
)
def __init__(self, connection, context, pipelined=False):
super(CallChain, self).__init__(context, pipelined)
#: The connection to reset on CallError.
self._connection = connection
def _rethrow(self, recv):
try:
return recv.get().unpickle()
except mitogen.core.ChannelError as e:
self._connection.reset()
raise ansible.errors.AnsibleConnectionFailure(
self.call_aborted_msg % (e,)
)
def call(self, func, *args, **kwargs):
"""
Like :meth:`mitogen.parent.CallChain.call`, but log timings.
"""
t0 = time.time()
try:
recv = self.call_async(func, *args, **kwargs)
return self._rethrow(recv)
finally:
LOG.debug('Call took %d ms: %r', 1000 * (time.time() - t0),
mitogen.parent.CallSpec(func, args, kwargs))
class Connection(ansible.plugins.connection.ConnectionBase):
#: The :class:`ansible_mitogen.process.Binding` representing the connection
#: multiplexer this connection's target is assigned to. :data:`None` when
#: disconnected.
binding = None
#: mitogen.parent.Context for the target account on the target, possibly
#: reached via become.
context = None
#: Context for the login account on the target. This is always the login
#: account, even when become=True.
login_context = None
#: Only sudo, su, and doas are supported for now.
become_methods = ['sudo', 'su', 'doas']
#: Dict containing init_child() return value as recorded at startup by
#: ContextService. Contains:
#:
#:  fork_context: Context connected to the fork parent process in the
#:      target account.
#:  home_dir: Target context's home directory.
#:  good_temp_dir: A writeable directory where new temporary directories
#:      can be created.
init_child_result = None
#: A :class:`mitogen.parent.CallChain` for calls made to the target
#: account, to ensure subsequent calls fail with the original exception if
#: pipelined directory creation or file transfer fails.
chain = None
#
# Note: any of the attributes below may be :data:`None` if the connection
# plugin was constructed directly by a non-cooperative action, such as in
# the case of the synchronize module.
#
#: Set to task_vars by on_action_run().
_task_vars = None
#: Set by on_action_run()
delegate_to_hostname = None
#: Set to '_loader.get_basedir()' by on_action_run(). Used by mitogen_local
#: to change the working directory to that of the current playbook,
#: matching vanilla Ansible behaviour.
loader_basedir = None
def __del__(self):
"""
Ansible cannot be trusted to always call close(); e.g. the synchronize
action constructs a local connection like this. So provide a destructor
in the hopes of catching these cases.
"""
# https://github.com/dw/mitogen/issues/140
self.close()
def on_action_run(self, task_vars, delegate_to_hostname, loader_basedir):
"""
Invoked by ActionModuleMixin to indicate a new task is about to start
executing. We use the opportunity to grab relevant bits from the
task-specific data.
:param dict task_vars:
Task variable dictionary.
:param str delegate_to_hostname:
:data:`None`, or the template-expanded inventory hostname this task
is being delegated to. A similar variable exists on PlayContext
when ``delegate_to:`` is active, however it is unexpanded.
:param str loader_basedir:
Loader base directory; see :attr:`loader_basedir`.
"""
self._task_vars = task_vars
self.delegate_to_hostname = delegate_to_hostname
self.loader_basedir = loader_basedir
self._put_connection()
def _get_task_vars(self):
"""
More information is needed than normally provided to an Ansible
connection. For proxied connections, intermediary configuration must
be inferred, and for any connection the configured Python interpreter
must be known.
There is no clean way to access this information that would not deviate
from the running Ansible version. The least invasive method known is to
reuse the running task's task_vars dict.
This method walks the stack to find task_vars of the Action plugin's
run(), or if no Action is present, from Strategy's _execute_meta(), as
in the case of 'meta: reset_connection'. The stack is walked in
addition to subclassing Action.run()/on_action_run(), as it is possible
for new connections to be constructed in addition to the preconstructed
connection passed into any running action.
"""
if self._task_vars is not None:
return self._task_vars
f = sys._getframe()
while f:
if f.f_code.co_name == 'run':
f_locals = f.f_locals
f_self = f_locals.get('self')
if isinstance(f_self, ansible_mitogen.mixins.ActionModuleMixin):
task_vars = f_locals.get('task_vars')
if task_vars:
LOG.debug('recovered task_vars from Action')
return task_vars
elif f.f_code.co_name == '_execute_meta':
f_all_vars = f.f_locals.get('all_vars')
if isinstance(f_all_vars, dict):
LOG.debug('recovered task_vars from meta:')
return f_all_vars
f = f.f_back
raise ansible.errors.AnsibleConnectionFailure(task_vars_msg)
def get_host_vars(self, inventory_hostname):
"""
Fetch the HostVars for a host.
:returns:
Variables dictionary or :data:`None`.
:raises ansible.errors.AnsibleConnectionFailure:
Task vars unavailable.
"""
task_vars = self._get_task_vars()
hostvars = task_vars.get('hostvars')
if hostvars:
return hostvars.get(inventory_hostname)
raise ansible.errors.AnsibleConnectionFailure(task_vars_msg)
def get_task_var(self, key, default=None):
"""
Fetch the value of a task variable related to connection configuration,
or, if delegate_to is active, fetch the same variable via HostVars for
the delegated-to machine.
When running with delegate_to, Ansible tasks have variables associated
with the original machine, not the delegated-to machine; therefore it
does not make sense to extract connection-related configuration for the
delegated-to machine from them.
"""
task_vars = self._get_task_vars()
if self.delegate_to_hostname is None:
if key in task_vars:
return task_vars[key]
else:
delegated_vars = task_vars['ansible_delegated_vars']
if self.delegate_to_hostname in delegated_vars:
task_vars = delegated_vars[self.delegate_to_hostname]
if key in task_vars:
return task_vars[key]
return default
@property
def homedir(self):
self._connect()
return self.init_child_result['home_dir']
def get_binding(self):
"""
Return the :class:`ansible_mitogen.process.Binding` representing the
process that hosts the physical connection and services (context
establishment, file transfer, ..) for our desired target.
"""
assert self.binding is not None
return self.binding
@property
def connected(self):
return self.context is not None
def _spec_from_via(self, proxied_inventory_name, via_spec):
"""
Produce a dict connection specification given a string `via_spec` of
the form `[[become_method:]become_user@]inventory_hostname`.
"""
become_user, _, inventory_name = via_spec.rpartition('@')
become_method, _, become_user = become_user.rpartition(':')
# must use __contains__ to avoid a TypeError for a missing host on
# Ansible 2.3.
via_vars = self.get_host_vars(inventory_name)
if via_vars is None:
raise ansible.errors.AnsibleConnectionFailure(
self.unknown_via_msg % (
via_spec,
proxied_inventory_name,
)
)
return ansible_mitogen.transport_config.MitogenViaSpec(
inventory_name=inventory_name,
play_context=self._play_context,
host_vars=dict(via_vars), # TODO: make it lazy
become_method=become_method or None,
become_user=become_user or None,
)
unknown_via_msg = 'mitogen_via=%s of %s specifies an unknown hostname'
via_cycle_msg = 'mitogen_via=%s of %s creates a cycle (%s)'
def _stack_from_spec(self, spec, stack=(), seen_names=()):
"""
Return a tuple of ContextService parameter dictionaries corresponding
to the connection described by `spec`, and any connection referenced by
its `mitogen_via` or `become` fields. Each element is a dict of the
form::
{
# Optional. If present and `True`, this hop is eligible for
# interpreter recycling.
"enable_lru": True,
# mitogen.master.Router method name.
"method": "ssh",
# mitogen.master.Router method kwargs.
"kwargs": {
"hostname": "..."
}
}
:param ansible_mitogen.transport_config.Spec spec:
Connection specification.
:param tuple stack:
Stack elements from parent call (used for recursion).
:param tuple seen_names:
Inventory hostnames from parent call (cycle detection).
:returns:
Tuple `(stack, seen_names)`.
"""
if spec.inventory_name() in seen_names:
raise ansible.errors.AnsibleConnectionFailure(
self.via_cycle_msg % (
spec.mitogen_via(),
spec.inventory_name(),
' -> '.join(reversed(
seen_names + (spec.inventory_name(),)
)),
)
)
if spec.mitogen_via():
stack = self._stack_from_spec(
self._spec_from_via(spec.inventory_name(), spec.mitogen_via()),
stack=stack,
seen_names=seen_names + (spec.inventory_name(),),
)
stack += (CONNECTION_METHOD[spec.transport()](spec),)
if spec.become() and ((spec.become_user() != spec.remote_user()) or
C.BECOME_ALLOW_SAME_USER):
stack += (CONNECTION_METHOD[spec.become_method()](spec),)
return stack
def _build_stack(self):
"""
Construct a list of dictionaries representing the connection
configuration between the controller and the target. This is
additionally used by the integration tests "mitogen_get_stack" action
to fetch the would-be connection configuration.
"""
spec = ansible_mitogen.transport_config.PlayContextSpec(
connection=self,
play_context=self._play_context,
transport=self.transport,
inventory_name=self.get_task_var('inventory_hostname'),
)
stack = self._stack_from_spec(spec)
return spec.inventory_name(), stack
def _connect_stack(self, stack):
"""
Pass `stack` to ContextService, requesting a copy of the context object
representing the last tuple element. If no connection exists yet,
ContextService will recursively establish it before returning it or
throwing an error.
See :meth:`ansible_mitogen.services.ContextService.get` docstring for
description of the returned dictionary.
"""
try:
dct = mitogen.service.call(
call_context=self.binding.get_service_context(),
service_name='ansible_mitogen.services.ContextService',
method_name='get',
stack=mitogen.utils.cast(list(stack)),
)
except mitogen.core.CallError:
LOG.warning('Connection failed; stack configuration was:\n%s',
pprint.pformat(stack))
raise
if dct['msg']:
if dct['method_name'] in self.become_methods:
raise ansible.errors.AnsibleModuleError(dct['msg'])
raise ansible.errors.AnsibleConnectionFailure(dct['msg'])
self.context = dct['context']
self.chain = CallChain(self, self.context, pipelined=True)
if self._play_context.become:
self.login_context = dct['via']
else:
self.login_context = self.context
self.init_child_result = dct['init_child_result']
def get_good_temp_dir(self):
"""
Return the 'good temporary directory' as discovered by
:func:`ansible_mitogen.target.init_child` immediately after
ContextService constructed the target context.
"""
self._connect()
return self.init_child_result['good_temp_dir']
def _connect(self):
"""
Establish a connection to the master process's UNIX listener socket,
constructing a mitogen.master.Router to communicate with the master,
and a mitogen.parent.Context to represent it.
Depending on the original transport we should emulate, trigger one of
the _connect_*() service calls defined above to cause the master
process to establish the real connection on our behalf, or return a
reference to the existing one.
"""
if self.connected:
return
inventory_name, stack = self._build_stack()
worker_model = ansible_mitogen.process.get_worker_model()
self.binding = worker_model.get_binding(
mitogen.utils.cast(inventory_name)
)
self._connect_stack(stack)
def _put_connection(self):
"""
Forget everything we know about the connected context. This function
cannot be called _reset() since that name is used as a public API by
Ansible 2.4 wait_for_connection plug-in.
"""
if not self.context:
return
self.chain.reset()
mitogen.service.call(
call_context=self.binding.get_service_context(),
service_name='ansible_mitogen.services.ContextService',
method_name='put',
context=self.context
)
self.context = None
self.login_context = None
self.init_child_result = None
self.chain = None
def close(self):
"""
Arrange for the mitogen.master.Router running in the worker to
gracefully shut down, and wait for shutdown to complete. Safe to call
multiple times.
"""
self._put_connection()
if self.binding:
self.binding.close()
self.binding = None
reset_compat_msg = (
'Mitogen only supports "reset_connection" on Ansible 2.5.6 or later'
)
def reset(self):
"""
Explicitly terminate the connection to the remote host. This discards
any local state we hold for the connection, returns the Connection to
the 'disconnected' state, and informs ContextService that the connection
is broken and should be shut down and discarded.
"""
if self._play_context.remote_addr is None:
# <2.5.6 incorrectly populate PlayContext for reset_connection
# https://github.com/ansible/ansible/issues/27520
raise ansible.errors.AnsibleConnectionFailure(
self.reset_compat_msg
)
# Clear out state in case we were ever connected.
self.close()
inventory_name, stack = self._build_stack()
if self._play_context.become:
stack = stack[:-1]
worker_model = ansible_mitogen.process.get_worker_model()
binding = worker_model.get_binding(inventory_name)
try:
mitogen.service.call(
call_context=binding.get_service_context(),
service_name='ansible_mitogen.services.ContextService',
method_name='reset',
stack=mitogen.utils.cast(list(stack)),
)
finally:
binding.close()
# Compatibility with Ansible 2.4 wait_for_connection plug-in.
_reset = reset
def get_chain(self, use_login=False, use_fork=False):
"""
Return the :class:`mitogen.parent.CallChain` to use for executing
function calls.
:param bool use_login:
If :data:`True`, always return the chain for the login account
rather than any active become user.
:param bool use_fork:
If :data:`True`, return the chain for the fork parent.
:returns mitogen.parent.CallChain:
"""
self._connect()
if use_login:
return self.login_context.default_call_chain
# See FORK_SUPPORTED comments in target.py.
if use_fork and self.init_child_result['fork_context'] is not None:
return self.init_child_result['fork_context'].default_call_chain
return self.chain
def spawn_isolated_child(self):
"""
Fork or launch a new child off the target context.
:returns:
mitogen.core.Context of the new child.
"""
return self.get_chain(use_fork=True).call(
ansible_mitogen.target.spawn_isolated_child
)
def get_extra_args(self):
"""
Overridden by connections/mitogen_kubectl.py to a list of additional
arguments for the command.
"""
# TODO: maybe use this for SSH too.
return []
def get_default_cwd(self):
"""
Overridden by connections/mitogen_local.py to emulate behaviour of CWD
being fixed to that of ActionBase._loader.get_basedir().
"""
return None
def get_default_env(self):
"""
Overridden by connections/mitogen_local.py to emulate behaviour of
WorkProcess environment inherited from WorkerProcess.
"""
return None
def exec_command(self, cmd, in_data='', sudoable=True, mitogen_chdir=None):
"""
Implement exec_command() by calling the corresponding
ansible_mitogen.target function in the target.
:param str cmd:
Shell command to execute.
:param bytes in_data:
Data to supply on ``stdin`` of the process.
:returns:
(return code, stdout bytes, stderr bytes)
"""
emulate_tty = (not in_data and sudoable)
rc, stdout, stderr = self.get_chain().call(
ansible_mitogen.target.exec_command,
cmd=mitogen.utils.cast(cmd),
in_data=mitogen.utils.cast(in_data),
chdir=mitogen_chdir or self.get_default_cwd(),
emulate_tty=emulate_tty,
)
stderr += b'Shared connection to %s closed.%s' % (
self._play_context.remote_addr.encode(),
(b'\r\n' if emulate_tty else b'\n'),
)
return rc, stdout, stderr
def fetch_file(self, in_path, out_path):
"""
Implement fetch_file() by calling the corresponding
ansible_mitogen.target function in the target.
:param str in_path:
Remote filesystem path to read.
:param str out_path:
Local filesystem path to write.
"""
self._connect()
ansible_mitogen.target.transfer_file(
context=self.context,
# in_path may be AnsibleUnicode
in_path=mitogen.utils.cast(in_path),
out_path=out_path
)
def put_data(self, out_path, data, mode=None, utimes=None):
"""
Implement put_file() by calling the corresponding ansible_mitogen.target
function in the target, transferring small files inline. This is
pipelined and will return immediately; failed transfers are reported as
exceptions in subsequent function calls.
:param str out_path:
Remote filesystem path to write.
:param bytes data:
File contents to put.
"""
self.get_chain().call_no_reply(
ansible_mitogen.target.write_path,
mitogen.utils.cast(out_path),
mitogen.core.Blob(data),
mode=mode,
utimes=utimes,
)
#: Maximum size of a small file before switching to streaming
#: transfer. This should really be the same as
#: mitogen.service.FileService.IO_SIZE, however the message format has
#: slightly more overhead, so just randomly subtract 4KiB.
SMALL_FILE_LIMIT = mitogen.core.CHUNK_SIZE - 4096
def _throw_io_error(self, e, path):
if e.args[0] == errno.ENOENT:
s = 'file or module does not exist: ' + path
raise ansible.errors.AnsibleFileNotFound(s)
def put_file(self, in_path, out_path):
"""
Implement put_file() by streamily transferring the file via
FileService.
:param str in_path:
Local filesystem path to read.
:param str out_path:
Remote filesystem path to write.
"""
try:
st = os.stat(in_path)
except OSError as e:
self._throw_io_error(e, in_path)
raise
if not stat.S_ISREG(st.st_mode):
raise IOError('%r is not a regular file.' % (in_path,))
# If the file is sufficiently small, just ship it in the argument list
# rather than introducing an extra RTT for the child to request it from
# FileService.
if st.st_size <= self.SMALL_FILE_LIMIT:
try:
fp = open(in_path, 'rb')
try:
s = fp.read(self.SMALL_FILE_LIMIT + 1)
finally:
fp.close()
except OSError as e:
self._throw_io_error(e, in_path)
raise
# Ensure the file did not grow during the read.
if len(s) == st.st_size:
return self.put_data(out_path, s, mode=st.st_mode,
utimes=(st.st_atime, st.st_mtime))
self._connect()
mitogen.service.call(
call_context=self.binding.get_service_context(),
service_name='mitogen.service.FileService',
method_name='register',
path=mitogen.utils.cast(in_path)
)
# For now this must remain synchronous, as the action plug-in may have
# passed us a temporary file to transfer. A future FileService could
# maintain an LRU list of open file descriptors to keep the temporary
# file alive, but that requires more work.
self.get_chain().call(
ansible_mitogen.target.transfer_file,
context=self.binding.get_child_service_context(),
in_path=in_path,
out_path=out_path
)
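# Illustration (added; hedged): put_file() above picks between two transfer
# strategies based on file size. Assuming mitogen.core.CHUNK_SIZE keeps its
# usual 128KiB value, the decision is approximately:
#
#   st.st_size <= SMALL_FILE_LIMIT  ->  put_data(): one pipelined message
#   st.st_size >  SMALL_FILE_LIMIT  ->  FileService streaming transfer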
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""
Stable names for PluginLoader instances across Ansible versions.
"""
from __future__ import absolute_import
__all__ = [
'action_loader',
'connection_loader',
'module_loader',
'module_utils_loader',
'shell_loader',
'strategy_loader',
]
try:
from ansible.plugins.loader import action_loader
from ansible.plugins.loader import connection_loader
from ansible.plugins.loader import module_loader
from ansible.plugins.loader import module_utils_loader
from ansible.plugins.loader import shell_loader
from ansible.plugins.loader import strategy_loader
except ImportError: # Ansible <2.4
from ansible.plugins import action_loader
from ansible.plugins import connection_loader
from ansible.plugins import module_loader
from ansible.plugins import module_utils_loader
from ansible.plugins import shell_loader
from ansible.plugins import strategy_loader
# These are original, unwrapped implementations
action_loader__get = action_loader.get
connection_loader__get = connection_loader.get
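# Usage sketch (added; not in the original source): consumers import the
# stable names from this module instead of the per-version locations, e.g.:
#
#   from ansible_mitogen.loaders import module_utils_loader
#   paths = module_utils_loader._get_paths(subdirs=False)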
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import
import logging
import os
import mitogen.core
import mitogen.utils
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
#: The process name set via :func:`set_process_name`.
_process_name = None
#: The PID of the process that last called :func:`set_process_name`, so its
#: value can be ignored in unknown fork children.
_process_pid = None
def set_process_name(name):
"""
Set a name to adorn log messages with.
"""
global _process_name
_process_name = name
global _process_pid
_process_pid = os.getpid()
class Handler(logging.Handler):
"""
Use Mitogen's log format, but send the result to a Display method.
"""
def __init__(self, normal_method):
logging.Handler.__init__(self)
self.formatter = mitogen.utils.log_get_formatter()
self.normal_method = normal_method
#: Set of target loggers that produce warnings and errors that spam the
#: console needlessly. Their log level is forced to INFO. A better strategy
#: may simply be to bury all target logs in DEBUG output, but not by
#: overriding their log level as done here.
NOISY_LOGGERS = frozenset([
'dnf', # issue #272; warns when a package is already installed.
'boto', # issue #541; normal boto retry logic can cause ERROR logs.
])
def emit(self, record):
mitogen_name = getattr(record, 'mitogen_name', '')
if mitogen_name == 'stderr':
record.levelno = logging.ERROR
if mitogen_name in self.NOISY_LOGGERS and record.levelno >= logging.WARNING:
record.levelno = logging.DEBUG
if _process_pid == os.getpid():
process_name = _process_name
else:
process_name = '?'
s = '[%-4s %d] %s' % (process_name, os.getpid(), self.format(record))
if record.levelno >= logging.ERROR:
display.error(s, wrap_text=False)
elif record.levelno >= logging.WARNING:
display.warning(s, formatted=True)
else:
self.normal_method(s)
def setup():
"""
Install handlers for Mitogen loggers to redirect them into the Ansible
display framework. Ansible installs its own logging framework handlers when
C.DEFAULT_LOG_PATH is set, therefore disable propagation for our handlers.
"""
l_mitogen = logging.getLogger('mitogen')
l_mitogen_io = logging.getLogger('mitogen.io')
l_ansible_mitogen = logging.getLogger('ansible_mitogen')
l_operon = logging.getLogger('operon')
for logger in l_mitogen, l_mitogen_io, l_ansible_mitogen, l_operon:
logger.handlers = [Handler(display.vvv)]
logger.propagate = False
if display.verbosity > 2:
l_ansible_mitogen.setLevel(logging.DEBUG)
l_mitogen.setLevel(logging.DEBUG)
else:
# Mitogen copies the active log level into new children, allowing them
# to filter tiny messages before they hit the network, and therefore
# before they wake the IO loop. Explicitly setting INFO saves ~4%
# running against just the local machine.
l_mitogen.setLevel(logging.ERROR)
l_ansible_mitogen.setLevel(logging.ERROR)
if display.verbosity > 3:
l_mitogen_io.setLevel(logging.DEBUG)
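# Usage sketch (added; hedged): process bootstrap code is expected to call
# these two functions roughly as below, after which Mitogen log records are
# routed through Ansible's Display rather than the stock logging handlers:
#
#   ansible_mitogen.logging.set_process_name('task')
#   ansible_mitogen.logging.setup()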
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import
import logging
import os
import pwd
import random
import traceback
try:
from shlex import quote as shlex_quote
except ImportError:
from pipes import quote as shlex_quote
from ansible.module_utils._text import to_bytes
from ansible.parsing.utils.jsonify import jsonify
import ansible
import ansible.constants
import ansible.plugins
import ansible.plugins.action
import mitogen.core
import mitogen.select
import mitogen.utils
import ansible_mitogen.connection
import ansible_mitogen.planner
import ansible_mitogen.target
from ansible.module_utils._text import to_text
try:
from ansible.utils.unsafe_proxy import wrap_var
except ImportError:
from ansible.vars.unsafe_proxy import wrap_var
LOG = logging.getLogger(__name__)
class ActionModuleMixin(ansible.plugins.action.ActionBase):
"""
The Mitogen-patched PluginLoader dynamically mixes this into every action
class that Ansible attempts to load. It exists to override all the
assumptions built into the base action class that should really belong in
some middle layer, or at least in the connection layer.
Functionality is defined here for:
* Capturing the final set of task variables and giving Connection a chance
to update its idea of the correct execution environment, before any
attempt is made to call a Connection method. While it's not expected for
the interpreter to change on a per-task basis, Ansible permits this, and
so it must be supported.
* Overriding lots of methods that try to call out to shell for mundane
reasons, such as copying files around, changing file permissions,
creating temporary directories and suchlike.
* Short-circuiting any use of Ansiballz or related code for executing a
module remotely using shell commands and SSH.
* Short-circuiting most of the logic in dealing with the fact that Ansible
always runs become: tasks across at least the SSH user account and the
destination user account, and handling the security permission issues
that crop up due to this. Mitogen always runs a task completely within
the target user account, so it's not a problem for us.
"""
def __init__(self, task, connection, *args, **kwargs):
"""
Verify the received connection is really a Mitogen connection. If not,
transmute this instance back into the original unadorned base class.
This allows running the Mitogen strategy in mixed-target playbooks,
where some targets use SSH while others use WinRM or some fancier UNIX
connection plug-in. That's because when the Mitogen strategy is active,
ActionModuleMixin is unconditionally mixed into any action module that
is instantiated, and there is no direct way for the monkey-patch to
know what kind of connection will be used upfront.
"""
super(ActionModuleMixin, self).__init__(task, connection, *args, **kwargs)
if not isinstance(connection, ansible_mitogen.connection.Connection):
_, self.__class__ = type(self).__bases__
def run(self, tmp=None, task_vars=None):
"""
Override run() to notify Connection of task-specific data, so it has a
chance to know e.g. the Python interpreter in use.
"""
self._connection.on_action_run(
task_vars=task_vars,
delegate_to_hostname=self._task.delegate_to,
loader_basedir=self._loader.get_basedir(),
)
return super(ActionModuleMixin, self).run(tmp, task_vars)
COMMAND_RESULT = {
'rc': 0,
'stdout': '',
'stdout_lines': [],
'stderr': ''
}
def fake_shell(self, func, stdout=False):
"""
Execute a function and decorate its return value in the style of
_low_level_execute_command(). This produces a return value that looks
like some shell command was run, when really func() was implemented
entirely in Python.
If the function raises :py:class:`mitogen.core.CallError`, this will be
translated into a failed shell command with a non-zero exit status.
:param func:
Function invoked as `func()`.
:returns:
See :py:attr:`COMMAND_RESULT`.
"""
dct = self.COMMAND_RESULT.copy()
try:
rc = func()
if stdout:
dct['stdout'] = repr(rc)
except mitogen.core.CallError:
LOG.exception('While emulating a shell command')
dct['rc'] = 1
dct['stderr'] = traceback.format_exc()
return dct
def _remote_file_exists(self, path):
"""
Determine if `path` exists by directly invoking os.path.exists() in the
target user account.
"""
LOG.debug('_remote_file_exists(%r)', path)
return self._connection.get_chain().call(
ansible_mitogen.target.file_exists,
mitogen.utils.cast(path)
)
def _configure_module(self, module_name, module_args, task_vars=None):
"""
Mitogen does not use the Ansiballz framework. This call should never
happen when ActionMixin is active, so crash if it does.
"""
assert False, "_configure_module() should never be called."
def _is_pipelining_enabled(self, module_style, wrap_async=False):
"""
Mitogen does not use SSH pipelining. This call should never happen when
ActionMixin is active, so crash if it does.
"""
assert False, "_is_pipelining_enabled() should never be called."
def _generate_tmp_path(self):
return os.path.join(
self._connection.get_good_temp_dir(),
'ansible_mitogen_action_%016x' % (
random.getrandbits(8*8),
)
)
def _make_tmp_path(self, remote_user=None):
"""
Create a temporary subdirectory as a child of the temporary directory
managed by the remote interpreter.
"""
LOG.debug('_make_tmp_path(remote_user=%r)', remote_user)
path = self._generate_tmp_path()
LOG.debug('Temporary directory: %r', path)
self._connection.get_chain().call_no_reply(os.mkdir, path)
self._connection._shell.tmpdir = path
return path
def _remove_tmp_path(self, tmp_path):
"""
Replace the base implementation's invocation of rm -rf, replacing it
with a pipelined call to :func:`ansible_mitogen.target.prune_tree`.
"""
LOG.debug('_remove_tmp_path(%r)', tmp_path)
if tmp_path is None and ansible.__version__ > '2.6':
tmp_path = self._connection._shell.tmpdir # 06f73ad578d
if tmp_path is not None:
self._connection.get_chain().call_no_reply(
ansible_mitogen.target.prune_tree,
tmp_path,
)
self._connection._shell.tmpdir = None
def _transfer_data(self, remote_path, data):
"""
Used by the base _execute_module(), and in <2.4 also by the template
action module, and probably others.
"""
if isinstance(data, dict):
data = jsonify(data)
if not isinstance(data, bytes):
data = to_bytes(data, errors='surrogate_or_strict')
LOG.debug('_transfer_data(%r, %s ..%d bytes)',
remote_path, type(data), len(data))
self._connection.put_data(remote_path, data)
return remote_path
#: Actions listed here cause :func:`_fixup_perms2` to avoid a needless
#: roundtrip, as they modify file modes separately afterwards. This is due
#: to the method prototype having a default of `execute=True`.
FIXUP_PERMS_RED_HERRING = set(['copy'])
def _fixup_perms2(self, remote_paths, remote_user=None, execute=True):
"""
Mitogen always executes ActionBase helper methods in the context of the
target user account, so it is never necessary to modify permissions
except to ensure the execute bit is set if requested.
"""
LOG.debug('_fixup_perms2(%r, remote_user=%r, execute=%r)',
remote_paths, remote_user, execute)
if execute and self._task.action not in self.FIXUP_PERMS_RED_HERRING:
return self._remote_chmod(remote_paths, mode='u+x')
return self.COMMAND_RESULT.copy()
def _remote_chmod(self, paths, mode, sudoable=False):
"""
Issue an asynchronous set_file_mode() call for every path in `paths`,
then format the resulting return value list with fake_shell().
"""
LOG.debug('_remote_chmod(%r, mode=%r, sudoable=%r)',
paths, mode, sudoable)
return self.fake_shell(lambda: mitogen.select.Select.all(
self._connection.get_chain().call_async(
ansible_mitogen.target.set_file_mode, path, mode
)
for path in paths
))
def _remote_chown(self, paths, user, sudoable=False):
"""
Issue an asynchronous os.chown() call for every path in `paths`, then
format the resulting return value list with fake_shell().
"""
LOG.debug('_remote_chown(%r, user=%r, sudoable=%r)',
paths, user, sudoable)
ent = self._connection.get_chain().call(pwd.getpwnam, user)
return self.fake_shell(lambda: mitogen.select.Select.all(
self._connection.get_chain().call_async(
os.chown, path, ent.pw_uid, ent.pw_gid
)
for path in paths
))
def _remote_expand_user(self, path, sudoable=True):
"""
Replace the base implementation's attempt to emulate
os.path.expanduser() with an actual call to os.path.expanduser().
:param bool sudoable:
If :data:`True`, indicate unqualified tilde ("~" with no username)
should be evaluated in the context of the login account, not any
become_user.
"""
LOG.debug('_remote_expand_user(%r, sudoable=%r)', path, sudoable)
if not path.startswith('~'):
# /home/foo -> /home/foo
return path
if sudoable or not self._play_context.become:
if path == '~':
# ~ -> /home/dmw
return self._connection.homedir
if path.startswith('~/'):
# ~/.ansible -> /home/dmw/.ansible
return os.path.join(self._connection.homedir, path[2:])
# ~root/.ansible -> /root/.ansible
return self._connection.get_chain(use_login=(not sudoable)).call(
os.path.expanduser,
mitogen.utils.cast(path),
)
def get_task_timeout_secs(self):
"""
Return the task "async:" value, portable across 2.4-2.5.
"""
try:
return self._task.async_val
except AttributeError:
return getattr(self._task, 'async')
def _set_temp_file_args(self, module_args, wrap_async):
# Ansible>2.5 module_utils reuses the action's temporary directory if
# one exists. Older versions error if this key is present.
if ansible.__version__ > '2.5':
if wrap_async:
# Sharing is not possible with async tasks, as in that case,
# the directory must outlive the action plug-in.
module_args['_ansible_tmpdir'] = None
else:
module_args['_ansible_tmpdir'] = self._connection._shell.tmpdir
# If _ansible_tmpdir is unset, Ansible>2.6 module_utils will use
# _ansible_remote_tmp as the location to create the module's temporary
# directory. Older versions error if this key is present.
if ansible.__version__ > '2.6':
module_args['_ansible_remote_tmp'] = (
self._connection.get_good_temp_dir()
)
def _execute_module(self, module_name=None, module_args=None, tmp=None,
task_vars=None, persist_files=False,
delete_remote_tmp=True, wrap_async=False):
"""
Collect up a module's execution environment then use it to invoke
target.run_module() or helpers.run_module_async() in the target
context.
"""
if module_name is None:
module_name = self._task.action
if module_args is None:
module_args = self._task.args
if task_vars is None:
task_vars = {}
self._update_module_args(module_name, module_args, task_vars)
env = {}
self._compute_environment_string(env)
self._set_temp_file_args(module_args, wrap_async)
self._connection._connect()
result = ansible_mitogen.planner.invoke(
ansible_mitogen.planner.Invocation(
action=self,
connection=self._connection,
module_name=mitogen.core.to_text(module_name),
module_args=mitogen.utils.cast(module_args),
task_vars=task_vars,
templar=self._templar,
env=mitogen.utils.cast(env),
wrap_async=wrap_async,
timeout_secs=self.get_task_timeout_secs(),
)
)
if tmp and ansible.__version__ < '2.5' and delete_remote_tmp:
# Built-in actions expected tmpdir to be cleaned up automatically
# on _execute_module().
self._remove_tmp_path(tmp)
return wrap_var(result)
def _postprocess_response(self, result):
"""
Apply fixups mimicking ActionBase._execute_module(); this is copied
verbatim from action/__init__.py, the guts of _parse_returned_data are
garbage and should be removed or reimplemented once tests exist.
:param dict result:
Dictionary with format::
{
"rc": int,
"stdout": "stdout data",
"stderr": "stderr data"
}
"""
data = self._parse_returned_data(result)
# Cutpasted from the base implementation.
if 'stdout' in data and 'stdout_lines' not in data:
data['stdout_lines'] = (data['stdout'] or u'').splitlines()
if 'stderr' in data and 'stderr_lines' not in data:
data['stderr_lines'] = (data['stderr'] or u'').splitlines()
return data
def _low_level_execute_command(self, cmd, sudoable=True, in_data=None,
executable=None,
encoding_errors='surrogate_then_replace',
chdir=None):
"""
Override the base implementation by simply calling
target.exec_command() in the target context.
"""
LOG.debug('_low_level_execute_command(%r, in_data=%r, exe=%r, dir=%r)',
cmd, type(in_data), executable, chdir)
if executable is None: # executable defaults to False
executable = self._play_context.executable
if executable:
cmd = executable + ' -c ' + shlex_quote(cmd)
rc, stdout, stderr = self._connection.exec_command(
cmd=cmd,
in_data=in_data,
sudoable=sudoable,
mitogen_chdir=chdir,
)
stdout_text = to_text(stdout, errors=encoding_errors)
return {
'rc': rc,
'stdout': stdout_text,
'stdout_lines': stdout_text.splitlines(),
'stderr': stderr,
}
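# Illustration (added): fake_shell() above decorates a plain Python call so
# it resembles _low_level_execute_command() output. For a successful func():
#
#   self.fake_shell(lambda: None)
#   -> {'rc': 0, 'stdout': '', 'stdout_lines': [], 'stderr': ''}
#
# while a mitogen.core.CallError surfaces as rc=1 with the traceback in
# 'stderr'.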
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import
from __future__ import unicode_literals
import collections
import imp
import os
import mitogen.master
PREFIX = 'ansible.module_utils.'
Module = collections.namedtuple('Module', 'name path kind parent')
def get_fullname(module):
"""
Reconstruct a Module's canonical path by recursing through its parents.
"""
bits = [str(module.name)]
while module.parent:
bits.append(str(module.parent.name))
module = module.parent
return '.'.join(reversed(bits))
def get_code(module):
"""
Compile and return a Module's code object.
"""
fp = open(module.path, 'rb')
try:
return compile(fp.read(), str(module.name), 'exec')
finally:
fp.close()
def is_pkg(module):
"""
Return :data:`True` if a Module represents a package.
"""
return module.kind == imp.PKG_DIRECTORY
def find(name, path=(), parent=None):
"""
Return a Module instance describing the first matching module found on the
search path.
:param str name:
Module name.
:param list path:
List of directory names to search for the module.
:param Module parent:
Optional module parent.
"""
assert isinstance(path, tuple)
head, _, tail = name.partition('.')
try:
tup = imp.find_module(head, list(path))
except ImportError:
return parent
fp, modpath, (suffix, mode, kind) = tup
if fp:
fp.close()
if parent and modpath == parent.path:
# 'from timeout import timeout', where 'timeout' is a function but also
# the name of the module being imported.
return None
if kind == imp.PKG_DIRECTORY:
modpath = os.path.join(modpath, '__init__.py')
module = Module(head, modpath, kind, parent)
# TODO: this code is entirely wrong on Python 3.x, but works well enough
# for Ansible. We need a new find_child() that only looks in the package
# directory, never falling back to the parent search path.
if tail and kind == imp.PKG_DIRECTORY:
return find_relative(module, tail, path)
return module
def find_relative(parent, name, path=()):
if parent.kind == imp.PKG_DIRECTORY:
path = (os.path.dirname(parent.path),) + path
return find(name, path, parent=parent)
def scan_fromlist(code):
for level, modname_s, fromlist in mitogen.master.scan_code_imports(code):
for name in fromlist:
yield level, '%s.%s' % (modname_s, name)
if not fromlist:
yield level, modname_s
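# Illustrative check (added; assumes mitogen.master.scan_code_imports() yields
# (level, modname, fromlist) tuples as in current Mitogen): a "from" import
# expands to one dotted name per imported symbol.
_demo_code = compile('from ansible.module_utils import basic', '<demo>', 'exec')
assert [name for _, name in scan_fromlist(_demo_code)] == ['ansible.module_utils.basic']
del _demo_code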
def scan(module_name, module_path, search_path):
module = Module(module_name, module_path, imp.PY_SOURCE, None)
stack = [module]
seen = set()
while stack:
module = stack.pop(0)
for level, fromname in scan_fromlist(get_code(module)):
if not fromname.startswith(PREFIX):
continue
imported = find(fromname[len(PREFIX):], search_path)
if imported is None or imported in seen:
continue
seen.add(imported)
stack.append(imported)
parent = imported.parent
while parent:
fullname = get_fullname(parent)
module = Module(fullname, parent.path, parent.kind, None)
if module not in seen:
seen.add(module)
stack.append(module)
parent = parent.parent
return sorted(
(PREFIX + get_fullname(module), module.path, is_pkg(module))
for module in seen
)
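# Illustrative check (added; paths are hypothetical): Module parent links let
# get_fullname() reconstruct the dotted path, while is_pkg() keys off the imp
# kind recorded at discovery time.
_demo_pkg = Module('six', '/x/six/__init__.py', imp.PKG_DIRECTORY, None)
_demo_mod = Module('moves', '/x/six/moves.py', imp.PY_SOURCE, _demo_pkg)
assert get_fullname(_demo_mod) == 'six.moves'
assert is_pkg(_demo_pkg) and not is_pkg(_demo_mod)
del _demo_pkg, _demo_mod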
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from __future__ import absolute_import
from __future__ import unicode_literals
import mitogen.core
def parse_script_interpreter(source):
"""
Parse the script interpreter portion of a UNIX hashbang using the rules
Linux uses.
:param str source: String like "/usr/bin/env python".
:returns:
Tuple of `(interpreter, arg)`, where `interpreter` is the script
interpreter and `arg` is its sole argument if present, otherwise
:py:data:`None`.
"""
# Find terminating newline. Assume last byte of binprm_buf if absent.
nl = source.find(b'\n', 0, 128)
if nl == -1:
nl = min(128, len(source))
# Split once on the first run of whitespace. If no whitespace exists,
# bits just contains the interpreter filename.
bits = source[0:nl].strip().split(None, 1)
if len(bits) == 1:
return mitogen.core.to_text(bits[0]), None
return mitogen.core.to_text(bits[0]), mitogen.core.to_text(bits[1])
def parse_hashbang(source):
"""
Parse a UNIX "hashbang line" using the syntax supported by Linux.
:param str source: String like "#!/usr/bin/env python".
:returns:
Tuple of `(interpreter, arg)`, where `interpreter` is the script
interpreter and `arg` is its sole argument if present, otherwise
:py:data:`None`.
"""
# Linux requires first 2 bytes with no whitespace, pretty sure it's the
# same everywhere. See binfmt_script.c.
if not source.startswith(b'#!'):
return None, None
return parse_script_interpreter(source[2:])
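# Illustrative checks (added; not in the original source): expected behaviour
# of the two parsers above on typical inputs.
assert parse_script_interpreter(b'/usr/bin/env python\n') == (u'/usr/bin/env', u'python')
assert parse_hashbang(b'#!/bin/sh\n') == (u'/bin/sh', None)
assert parse_hashbang(b'echo hello\n') == (None, None)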
# Copyright 2019, David Wilson
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""
Classes to detect each case from [0] and prepare arguments necessary for the
corresponding Runner class within the target, including preloading requisite
files/modules known to be missing.
[0] "Ansible Module Architecture", developing_program_flow_modules.html
"""
from __future__ import absolute_import
from __future__ import unicode_literals
import json
import logging
import os
import random
from ansible.executor import module_common
import ansible.errors
import ansible.module_utils
import ansible.release
import mitogen.core
import mitogen.select
import mitogen.service
import mitogen.utils
import ansible_mitogen.loaders
import ansible_mitogen.parsing
import ansible_mitogen.target
LOG = logging.getLogger(__name__)
NO_METHOD_MSG = 'Mitogen: no invocation method found for: '
NO_INTERPRETER_MSG = 'module (%s) is missing interpreter line'
NO_MODULE_MSG = 'The module %s was not found in configured module paths.'
_planner_by_path = {}
class Invocation(object):
"""
Collect up a module's execution environment then use it to invoke
target.run_module() or helpers.run_module_async() in the target context.
"""
def __init__(self, action, connection, module_name, module_args,
task_vars, templar, env, wrap_async, timeout_secs):
#: ActionBase instance invoking the module. Required to access some
#: output postprocessing methods that don't belong in ActionBase at
#: all.
self.action = action
#: Ansible connection to use to contact the target. Must be an
#: ansible_mitogen connection.
self.connection = connection
#: Name of the module ('command', 'shell', etc.) to execute.
self.module_name = module_name
#: Final module arguments.
self.module_args = module_args
#: Task variables, needed to extract ansible_*_interpreter.
self.task_vars = task_vars
#: Templar, needed to extract ansible_*_interpreter.
self.templar = templar
#: Final module environment.
self.env = env
#: Boolean, if :py:data:`True`, launch the module asynchronously.
self.wrap_async = wrap_async
#: Integer, if >0, limit the time an asynchronous job may run for.
self.timeout_secs = timeout_secs
#: Initially ``None``, but set by :func:`invoke`. The path on the
#: master to the module's implementation file.
self.module_path = None
#: Initially ``None``, but set by :func:`invoke`. The raw source or
#: binary contents of the module.
self._module_source = None
def get_module_source(self):
if self._module_source is None:
self._module_source = read_file(self.module_path)
return self._module_source
def __repr__(self):
return 'Invocation(module_name=%s)' % (self.module_name,)
class Planner(object):
"""
A Planner receives a module name and the contents of its implementation
file, indicates whether or not it understands how to run the module, and
exports a method to run the module.
"""
def __init__(self, invocation):
self._inv = invocation
@classmethod
def detect(cls, path, source):
"""
Return :data:`True` if the supplied module `source` matches the module
type implemented by this planner.
"""
raise NotImplementedError()
def should_fork(self):
"""
Asynchronous tasks must always be forked.
"""
return self._inv.wrap_async
def get_push_files(self):
"""
Return a list of files that should be propagated to the target context
using PushFileService. The default implementation pushes nothing.
"""
return []
def get_module_deps(self):
"""
Return a list of the Python module names imported by the module.
"""
return []
def get_kwargs(self, **kwargs):
"""
If :meth:`detect` returned :data:`True`, plan for the module's
execution, including granting access to or delivering any files to it
that are known to be absent, and finally return a dict::
{
# Name of the class from runners.py that implements the
# target-side execution of this module type.
"runner_name": "...",
# Remaining keys are passed to the constructor of the class
# named by `runner_name`.
}
"""
binding = self._inv.connection.get_binding()
new = dict((mitogen.core.UnicodeType(k), kwargs[k])
for k in kwargs)
new.setdefault('good_temp_dir',
self._inv.connection.get_good_temp_dir())
new.setdefault('cwd', self._inv.connection.get_default_cwd())
new.setdefault('extra_env', self._inv.connection.get_default_env())
new.setdefault('emulate_tty', True)
new.setdefault('service_context', binding.get_child_service_context())
return new
def __repr__(self):
return '%s()' % (type(self).__name__,)
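# Sketch (added; hypothetical subclass): the minimal Planner contract as the
# concrete planners below implement it -- a detect() classmethod plus a
# runner_name naming the target-side Runner class:
#
#   class ExamplePlanner(Planner):
#       runner_name = 'ExampleRunner'
#       @classmethod
#       def detect(cls, path, source):
#           return source.startswith(b'#!/usr/bin/example')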
class BinaryPlanner(Planner):
"""
Binary modules take their arguments and return data to Ansible in the
same way as want-JSON modules.
"""
runner_name = 'BinaryRunner'
@classmethod
def detect(cls, path, source):
return module_common._is_binary(source)
def get_push_files(self):
return [mitogen.core.to_text(self._inv.module_path)]
def get_kwargs(self, **kwargs):
return super(BinaryPlanner, self).get_kwargs(
runner_name=self.runner_name,
module=self._inv.module_name,
path=self._inv.module_path,
json_args=json.dumps(self._inv.module_args),
env=self._inv.env,
**kwargs
)
class ScriptPlanner(BinaryPlanner):
"""
Common functionality for script module planners -- handle interpreter
detection and rewrite.
"""
def _rewrite_interpreter(self, path):
"""
Given the original interpreter binary extracted from the script's
interpreter line, look up the associated `ansible_*_interpreter`
variable, render it and return it.
:param str path:
Absolute UNIX path to original interpreter.
:returns:
Shell fragment prefix used to execute the script via "/bin/sh -c".
While `ansible_*_interpreter` documentation suggests shell isn't
involved here, the vanilla implementation uses it and that use is
exploited in common playbooks.
"""
key = u'ansible_%s_interpreter' % os.path.basename(path).strip()
try:
template = self._inv.task_vars[key]
except KeyError:
return path
return mitogen.utils.cast(self._inv.templar.template(template))
def _get_interpreter(self):
path, arg = ansible_mitogen.parsing.parse_hashbang(
self._inv.get_module_source()
)
if path is None:
raise ansible.errors.AnsibleError(NO_INTERPRETER_MSG % (
self._inv.module_name,
))
fragment = self._rewrite_interpreter(path)
if arg:
fragment += ' ' + arg
return fragment, path.startswith('python')
def get_kwargs(self, **kwargs):
interpreter_fragment, is_python = self._get_interpreter()
return super(ScriptPlanner, self).get_kwargs(
interpreter_fragment=interpreter_fragment,
is_python=is_python,
**kwargs
)
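# Illustration (added; hedged): with task var ansible_python_interpreter set
# to "/opt/py/bin/python", a module whose hashbang reads
# "#!/usr/bin/python -u" yields the execution fragment
# "/opt/py/bin/python -u" from _get_interpreter() above.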
class JsonArgsPlanner(ScriptPlanner):
"""
Script that has its interpreter directive and the task arguments
substituted into its source as a JSON string.
"""
runner_name = 'JsonArgsRunner'
@classmethod
def detect(cls, path, source):
return module_common.REPLACER_JSONARGS in source
class WantJsonPlanner(ScriptPlanner):
"""
If a module has the string WANT_JSON in it anywhere, Ansible treats it as a
non-native module that accepts a filename as its only command line
parameter. The filename is for a temporary file containing a JSON string
containing the module's parameters. The module needs to open the file, read
and parse the parameters, operate on the data, and print its return data as
a JSON encoded dictionary to stdout before exiting.
These types of modules are self-contained entities. As of Ansible 2.1,
Ansible only modifies them to change a shebang line if present.
"""
runner_name = 'WantJsonRunner'
@classmethod
def detect(cls, path, source):
return b'WANT_JSON' in source


class NewStylePlanner(ScriptPlanner):
    """
    The Ansiballz framework differs from module replacer in that it uses real
    Python imports of things in ansible/module_utils instead of merely
    preprocessing the module.
    """
    runner_name = 'NewStyleRunner'
    marker = b'from ansible.module_utils.'

    @classmethod
    def detect(cls, path, source):
        return cls.marker in source

    def _get_interpreter(self):
        return None, None

    def get_push_files(self):
        return super(NewStylePlanner, self).get_push_files() + [
            mitogen.core.to_text(path)
            for fullname, path, is_pkg in self.get_module_map()['custom']
        ]

    def get_module_deps(self):
        return self.get_module_map()['builtin']

    #: Module names appearing in this set always require forking, usually due
    #: to some terminal leakage that cannot be worked around in any sane
    #: manner.
    ALWAYS_FORK_MODULES = frozenset([
        'dnf',  # issue #280; py-dnf/hawkey need therapy
        'firewalld',  # issue #570: ansible module_utils caches dbus conn
    ])

    def should_fork(self):
        """
        In addition to asynchronous tasks, new-style modules should be forked
        if:

        * the user specifies mitogen_task_isolation=fork, or
        * the new-style module has a custom module search path, or
        * the module is known to leak like a sieve.
        """
        return (
            super(NewStylePlanner, self).should_fork() or
            (self._inv.task_vars.get('mitogen_task_isolation') == 'fork') or
            (self._inv.module_name in self.ALWAYS_FORK_MODULES) or
            (len(self.get_module_map()['custom']) > 0)
        )

    def get_search_path(self):
        return tuple(
            path
            for path in ansible_mitogen.loaders.module_utils_loader._get_paths(
                subdirs=False
            )
        )

    _module_map = None

    def get_module_map(self):
        if self._module_map is None:
            binding = self._inv.connection.get_binding()
            self._module_map = mitogen.service.call(
                call_context=binding.get_service_context(),
                service_name='ansible_mitogen.services.ModuleDepService',
                method_name='scan',
                module_name='ansible_module_%s' % (self._inv.module_name,),
                module_path=self._inv.module_path,
                search_path=self.get_search_path(),
                builtin_path=module_common._MODULE_UTILS_PATH,
                context=self._inv.connection.context,
            )
        return self._module_map

    def get_kwargs(self):
        return super(NewStylePlanner, self).get_kwargs(
            module_map=self.get_module_map(),
            py_module_name=py_modname_from_path(
                self._inv.module_name,
                self._inv.module_path,
            ),
        )
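

# Illustrative sketch, not part of the upstream module: judging by its use
# above, the module map returned by ModuleDepService.scan() is a dict shaped
# roughly like:
#
#   {
#       # Names importable on the target:
#       'builtin': ['ansible.module_utils.basic', ...],
#       # (fullname, path, is_pkg) tuples for locally discovered files:
#       'custom': [('my_utils', '/path/to/my_utils.py', False), ...],
#   }
#
# A non-empty 'custom' list forces forking in should_fork(), and each custom
# path is pushed to the target via get_push_files().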


class ReplacerPlanner(NewStylePlanner):
    """
    The Module Replacer framework is the original framework implementing
    new-style modules. It is essentially a preprocessor (like the C
    Preprocessor for those familiar with that programming language). It does
    straight substitutions of specific substring patterns in the module file.
    There are two types of substitutions.

    * Replacements that only happen in the module file. These are public
      replacement strings that modules can utilize to get helpful boilerplate
      or access to arguments.

      "from ansible.module_utils.MOD_LIB_NAME import *" is replaced with the
      contents of the ansible/module_utils/MOD_LIB_NAME.py. These should only
      be used with new-style Python modules.

      "#<<INCLUDE_ANSIBLE_MODULE_COMMON>>" is equivalent to
      "from ansible.module_utils.basic import *" and should also only apply to
      new-style Python modules.

      "# POWERSHELL_COMMON" substitutes the contents of
      "ansible/module_utils/powershell.ps1". It should only be used with
      new-style Powershell modules.
    """
    runner_name = 'ReplacerRunner'

    @classmethod
    def detect(cls, path, source):
        return module_common.REPLACER in source
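

# Illustrative sketch, not part of the upstream module: a module written for
# Module Replacer, which ReplacerPlanner.detect() matches on the
# module_common.REPLACER marker:
#
#   #!/usr/bin/python
#   #<<INCLUDE_ANSIBLE_MODULE_COMMON>>
#
# Before execution, the marker line is substituted with the contents of
# ansible/module_utils/basic.py, per the equivalence described in the
# docstring above.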


class OldStylePlanner(ScriptPlanner):
    runner_name = 'OldStyleRunner'

    @classmethod
    def detect(cls, path, source):
        # Everything else.
        return True


_planners = [
    BinaryPlanner,
    # ReplacerPlanner,
    NewStylePlanner,
    JsonArgsPlanner,
    WantJsonPlanner,
    OldStylePlanner,
]
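

# Illustrative sketch, not part of the upstream module: _get_planner() below
# walks this list in order and the first detect() returning true wins, with
# OldStylePlanner as the unconditional catch-all.
def _example_planner_order():  # pragma: no cover
    # A plain shell script carrying only a WANT_JSON marker falls through
    # binary, new-style and JSON-args detection before WantJsonPlanner
    # claims it:
    source = b'#!/bin/sh\n# WANT_JSON\n'
    assert _get_planner('demo', '/tmp/demo.sh', source) is WantJsonPlanner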


try:
    _get_ansible_module_fqn = module_common._get_ansible_module_fqn
except AttributeError:
    _get_ansible_module_fqn = None


def py_modname_from_path(name, path):
    """
    Fetch the logical name of a new-style module as it might appear in
    :data:`sys.modules` of the target's Python interpreter.

    * For Ansible <2.7, this is an unpackaged module named like
      "ansible_module_%s".
    * For Ansible 2.7 through 2.8, this is an unpackaged module named like
      "ansible.modules.%s".
    * Since Ansible 2.9, modules appearing within a package have the original
      package hierarchy approximated on the target, enabling relative imports
      to function correctly. For example, "ansible.modules.system.setup".
    """
    # 2.9+
    if _get_ansible_module_fqn:
        try:
            return _get_ansible_module_fqn(path)
        except ValueError:
            pass

    if ansible.__version__ < '2.7':
        return 'ansible_module_' + name

    # 2.7 - 2.8
    return 'ansible.modules.' + name
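

# Illustrative sketch, not part of the upstream module: names produced for the
# builtin "setup" module under each naming scheme described above.
#
#   Ansible <2.7:        'ansible_module_setup'
#   Ansible 2.7 - 2.8:   'ansible.modules.setup'
#   Ansible 2.9+:        'ansible.modules.system.setup'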


def read_file(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        bits = []
        # Read 64KiB chunks until EOF.
        while True:
            chunk = os.read(fd, 65536)
            if not chunk:
                break
            bits.append(chunk)
    finally:
        os.close(fd)

    return mitogen.core.b('').join(bits)


def _propagate_deps(invocation, planner, context):
    binding = invocation.connection.get_binding()
    mitogen.service.call(
        call_context=binding.get_service_context(),
        service_name='mitogen.service.PushFileService',
        method_name='propagate_paths_and_modules',
        context=context,
        paths=planner.get_push_files(),
        modules=planner.get_module_deps(),
    )


def _invoke_async_task(invocation, planner):
    job_id = '%016x' % random.randint(0, 2**64)
    context = invocation.connection.spawn_isolated_child()
    _propagate_deps(invocation, planner, context)

    with mitogen.core.Receiver(context.router) as started_recv:
        call_recv = context.call_async(
            ansible_mitogen.target.run_module_async,
            job_id=job_id,
            timeout_secs=invocation.timeout_secs,
            started_sender=started_recv.to_sender(),
            kwargs=planner.get_kwargs(),
        )

        # Wait for run_module_async() to crash, or for AsyncRunner to indicate
        # the job file has been written.
        for msg in mitogen.select.Select([started_recv, call_recv]):
            if msg.receiver is call_recv:
                # It can only be an exception.
                raise msg.unpickle()
            break

    return {
        'stdout': json.dumps({
            # modules/utilities/logic/async_wrapper.py::_run_module().
            'changed': True,
            'started': 1,
            'finished': 0,
            'ansible_job_id': job_id,
        })
    }
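

# Illustrative sketch, not part of the upstream module: the dict above mirrors
# the status blob Ansible's async_wrapper prints, so asynchronous tasks behave
# as usual from the playbook's perspective:
#
#   - command: /usr/bin/do_slow_thing
#     async: 300
#     poll: 5
#
# async_status subsequently polls the job file that AsyncRunner writes on the
# target under the returned ansible_job_id.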


def _invoke_isolated_task(invocation, planner):
    context = invocation.connection.spawn_isolated_child()
    _propagate_deps(invocation, planner, context)
    try:
        return context.call(
            ansible_mitogen.target.run_module,
            kwargs=planner.get_kwargs(),
        )
    finally:
        context.shutdown()


def _get_planner(name, path, source):
    for klass in _planners:
        if klass.detect(path, source):
            LOG.debug('%r accepted %r (filename %r)', klass, name, path)
            return klass
        LOG.debug('%r rejected %r', klass, name)
    # Unreachable in practice, since OldStylePlanner.detect() always matches.
    # Note the original raised with repr(invocation), a name not in scope
    # here, which would have masked the real error with a NameError.
    raise ansible.errors.AnsibleError(NO_METHOD_MSG + repr(name))


def invoke(invocation):
    """
    Find a Planner subclass corresponding to `invocation` and use it to invoke
    the module.

    :param Invocation invocation:
    :returns:
        Module return dict.
    :raises ansible.errors.AnsibleError:
        Unrecognized/unsupported module type.
    """
    path = ansible_mitogen.loaders.module_loader.find_plugin(
        invocation.module_name,
        '',
    )
    if path is None:
        raise ansible.errors.AnsibleError(NO_MODULE_MSG % (
            invocation.module_name,
        ))

    invocation.module_path = mitogen.core.to_text(path)
    if invocation.module_path not in _planner_by_path:
        _planner_by_path[invocation.module_path] = _get_planner(
            invocation.module_name,
            invocation.module_path,
            invocation.get_module_source()
        )

    planner = _planner_by_path[invocation.module_path](invocation)
    if invocation.wrap_async:
        response = _invoke_async_task(invocation, planner)
    elif planner.should_fork():
        response = _invoke_isolated_task(invocation, planner)
    else:
        _propagate_deps(invocation, planner, invocation.connection.context)
        response = invocation.connection.get_chain().call(
            ansible_mitogen.target.run_module,
            kwargs=planner.get_kwargs(),
        )

    return invocation.action._postprocess_response(response)
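

# Illustrative sketch, not part of the upstream module: invoke() is driven by
# the action plug-in layer, roughly along these lines (field names inferred
# from the Invocation attributes used throughout this file):
#
#   result = ansible_mitogen.planner.invoke(
#       ansible_mitogen.planner.Invocation(
#           action=action,                  # the running ActionModule
#           connection=action._connection,  # ansible_mitogen Connection
#           module_name='ping',
#           module_args={},
#           task_vars=task_vars,
#           templar=templar,
#           env={},
#           wrap_async=False,
#           timeout_secs=None,
#       )
#   )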