Thursday, 30 April 2020

Writing Production-ready Code; Nornir Edition

Let’s start with the least interesting topic first – documentation. You have to consider two aspects here:

1. Peripheral files: Files not part of the project, such as architecture diagrams or README files, are good places for newcomers to learn about your work. Be complete yet concise; describe how everything fits together as simply as you can. Invest 10 minutes in learning Markdown or reStructuredText if you don’t know them.

2. The code itself: Perhaps you’ve heard the term “self-documenting code”. Python makes this easy as many of its idioms and semantics read like plain English. Resist the urge to use overly clever or complex techniques where they aren’t necessary. Comment liberally, not just for others, but as a favor to your future self. Developers tend to forget how their code works a few weeks after it has been written (at least, I know I do)!

I think it is beyond dispute that static code analysis tools such as linters, security scanners, and code formatters are great additions to any code project. I don’t have strong opinions on precisely which tools are the best, but I’ve grown comfortable with the following options. All of them can be installed using pip:

1. pylint: Python linter that checks for syntax errors, styling issues, and minor security issues
2. bandit: Python security analyzer that reports vulnerabilities based on severity and confidence
3. black: Python formatter to keep source code consistent (spacing, quotes, continuations, etc.)
4. yamllint: YAML linter that checks syntax and styling; similar to pylint but for YAML files
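
For example, a single pip command pulls in all four:

$ pip install pylint bandit black yamllint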

Sometimes you won’t find a public linter for the code you care about. Time permitting, write your own. Because the narc project consumes JSON files as input, I wrote a simple jsonlint.py script that finds all JSON files, attempts to parse Python objects from them, and fails if any exceptions are raised. That’s it. I’m only trying to answer the question “Is the file formatted correctly?” I’d rather know right away instead of waiting for Nornir to crash later.

import json
import os
import sys

# "path" is the directory containing the JSON variable files; it is
# assumed to be defined earlier in the script (omitted here for brevity)
failed = False
for varfile in os.listdir(path):
    if varfile.endswith(".json"):
        filepath = os.path.join(path, varfile)
        with open(filepath, "r") as handle:
            try:
                # Attempt to load the JSON data into Python objects
                json.load(handle)
            except json.decoder.JSONDecodeError as exc:
                # Print specific file and error condition, mark failure
                print(f"{filepath}: {exc}")
                failed = True

# If any failure occurred, use rc=1 to signal an error
if failed:
    sys.exit(1)

These tools take little effort to deploy and offer a very high “return on effort”. However, they are superficial in their test coverage and wholly insufficient by themselves. Most developers begin testing their code by constructing unit tests first. These test the smallest, atomic (indivisible) parts of a program, such as functions, methods, or classes. Think of electronics manufacturing: a component on a circuit board may be tested by measuring the voltage across two pins. That measurement is meaningless in the context of the board’s overall purpose, yet it verifies a critical component within a larger, complex system. The same concept holds for software projects.

It is conventional to keep all tests, unit or otherwise, in a tests/ directory parallel to the project’s source code. This keeps things organized and allows the code project and the test structure to be designed independently. My jsonlint.py script lives here, along with several other files beginning with test_. This naming convention is common in Python projects to identify files containing tests, and popular Python testing frameworks like pytest will automatically discover and execute them.

$ tree tests/
tests/
|-- data
|   |-- cmd_checks.yaml
|   `-- dummy_checks.yaml
|-- jsonlint.py
|-- test_get_cmd.py
`-- test_validation.py

Consider the test_get_cmd.py file first, which tests the get_cmd() function. This function takes in a dictionary representing an ASA rule to check and expands it into a packet-tracer command that the ASA will understand. Some people call this “unparsing”, as it transforms structured data into plain text. This process is deterministic and easy to test; given any dictionary, we can predict what the command should be. In the data/ directory, I’ve defined a few YAML files which contain these test cases. I usually recommend keeping static data out of your test code and developing general test processes instead. The narc project supports TCP, UDP, ICMP, and raw IP protocol flows, so my test file should have at least 4 cases. Using nested dictionaries, we can define individual cases that represent the chk input values, while the expected_cmd field contains the expected packet-tracer command. I think the file is self-explanatory, and you can check test_get_cmd.py to see how this file is consumed.

$ cat tests/data/cmd_checks.yaml
---
cmd_checks:
  tcp_full:
    in_intf: "inside"
    proto: "tcp"
    src_ip: "192.0.2.1"
    src_port: 5001
    dst_ip: "192.0.2.2"
    dst_port: 5002
    expected_cmd: >-
      packet-tracer input inside tcp
      192.0.2.1 5001 192.0.2.2 5002 xml
  udp_full:
    in_intf: "inside"
    proto: "udp"
    src_ip: "192.0.2.1"
    src_port: 5001
    dst_ip: "192.0.2.2"
    dst_port: 5002
    expected_cmd: >-
      packet-tracer input inside udp
      192.0.2.1 5001 192.0.2.2 5002 xml
  icmp_full:
    in_intf: "inside"
    proto: "icmp"
    src_ip: "192.0.2.1"
    dst_ip: "192.0.2.2"
    icmp_type: 8
    icmp_code: 0
    expected_cmd: >-
      packet-tracer input inside icmp
      192.0.2.1 8 0 192.0.2.2 xml
  rawip_full:
    in_intf: "inside"
    proto: 123
    src_ip: "192.0.2.1"
    dst_ip: "192.0.2.2"
    expected_cmd: >-
      packet-tracer input inside rawip
      192.0.2.1 123 192.0.2.2 xml
...
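
For reference, a test consuming this file might look roughly like the sketch below. This is a simplified illustration rather than the actual contents of test_get_cmd.py, and it assumes get_cmd() can be imported from the project code and accepts the check dictionary as its only argument.

import yaml

# Hypothetical import path; narc's real module layout may differ
from runbook import get_cmd


def test_get_cmd_cases():
    """Every check in the YAML file should expand to its expected command."""
    with open("tests/data/cmd_checks.yaml", "r") as handle:
        cases = yaml.safe_load(handle)["cmd_checks"]

    for name, chk in cases.items():
        # Separate the expected command from the input values, then compare
        expected = chk.pop("expected_cmd")
        assert get_cmd(chk) == expected, f"unexpected command for {name}"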

All good code projects perform some degree of input data validation. Suppose a user enters an IPv4 address of 192.0.2.1.7 or a TCP port of -1. Surely the ASA would throw an error message, but why let it get to that point? Problems don’t get better over time, and we should test for these conditions early; in general, we want to “fail fast”. That’s what the test_validation.py script does, and it works in conjunction with the dummy_checks.yaml file. Invalid “check” dictionaries should be logged and never sent to the network device.
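
To make the idea concrete, here is a minimal, hypothetical validation helper in the same spirit; the function name and logic are illustrative and do not mirror narc’s actual validation code.

import ipaddress


def is_valid_flow(chk):
    """Return True if a check's addresses and ports look sane (illustrative)."""
    # Reject malformed IPv4/IPv6 addresses such as 192.0.2.1.7
    try:
        ipaddress.ip_address(chk["src_ip"])
        ipaddress.ip_address(chk["dst_ip"])
    except (ValueError, KeyError):
        return False

    # Reject out-of-range TCP/UDP ports such as -1
    for key in ("src_port", "dst_port"):
        port = chk.get(key)
        if port is not None and not (isinstance(port, int) and 0 <= port <= 65535):
            return False

    return True

With a helper like this, a unit test can feed in deliberately malformed checks and assert that they are rejected before any command is ever built.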

As a brief aside, data validation is inherent when using modeling languages like YANG. This is one of the reasons why model-driven programmability and telemetry are growing in popularity. In addition to removing the arbitrariness of data structures, it enforces data compliance without explicit coding logic.

We’ve tested quite a bit so far, but we haven’t tied anything together yet. Always consider building some kind of integration/system-level testing into your project. For narc, I introduced a feature named “dryrun” that is easily toggled using a CLI argument at runtime. This code bypasses the Netmiko logic and instead generates simulated (sometimes called “mocked”) XML output for each packet-tracer command. It runs instantly and doesn’t require access to any network devices. We don’t really care whether the rules pass or fail (hint: they’ll always pass), just that the solution is plumbed together correctly.

The diagram below illustrates how mocking works at a high level; the goal is to keep the detour as short and as transparent as possible. You want to maximize testing before and after the mocking activity. Given Nornir’s flexible architecture with easy-to-define custom tasks, I’ve created a custom _mock_packet_trace task. It looks and feels like netmiko_send_command, as it returns an identical result, but is designed for local testing.

[Diagram: high-level view of the mocking workflow]
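
For illustration, a custom task along these lines might look like the sketch below. It is a heavily simplified stand-in for narc’s actual _mock_packet_trace, assuming Nornir 2.x conventions where a task is a plain function that receives the task object and returns a Result; the chk parameter and the canned XML are illustrative.

from nornir.core.task import Result


def _mock_packet_trace(task, chk):
    """Return canned packet-tracer XML instead of querying a real ASA."""
    # A trimmed example of ASA packet-tracer XML; real output contains many
    # <Phase> elements followed by a final <result> element
    xml_text = (
        "<Phase><id>1</id><type>ROUTE-LOOKUP</type><result>ALLOW</result></Phase>"
        "<result><action>allow</action></result>"
    )

    # Mirror the shape of netmiko_send_command's return value: a Result
    # whose .result attribute carries the command output as text
    return Result(host=task.host, result=xml_text)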

How do we tie this seemingly complex string of events together? Opinions on this topic, as with everything in programming, run far and wide. I’m old school and prefer to use a Makefile, but more modern tools exist, like Task, which is YAML-based and less finicky; some people just prefer shell scripts. Makefiles were traditionally used to compile code and link the resulting objects in languages like C. For Python projects, you can create “targets” or “goals” to run various tasks; think of each target as a small shell script. For example, make lint runs the static code analysis tools pylint, bandit, black, and yamllint, plus the jsonlint.py script. Then, make unit runs pytest on all test_*.py files. Finally, make dry executes the Nornir runbook in dryrun mode, testing the system as a whole (minus Netmiko) with mock data. You can also create operational targets unrelated to the project code; I often define make clean to remove application artifacts, Python byte code (.pyc) files, and logs.
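
To give a feel for this, here is a trimmed sketch of what such targets might contain. The recipes mirror the commands visible in the log output later in this post; narc’s real Makefile may be organized differently.

.PHONY: lint unit dry clean

lint:
	@echo "Starting lint"
	find . -name "*.yaml" | xargs yamllint -s
	python tests/jsonlint.py
	find . -name "*.py" | xargs pylint
	find . -name "*.py" | xargs bandit --skip B101
	find . -name "*.py" | xargs black -l 85 --check
	@echo "Completed lint"

unit:
	@echo "Starting unit tests"
	python -m pytest tests/ --verbose
	@echo "Completed unit tests"

dry:
	@echo "Starting dryruns"
	python runbook.py --dryrun --failonly
	@echo "Completed dryruns"

clean:
	@echo "Starting clean"
	find . -name "*.pyc" | xargs -r rm
	rm -f nornir.log
	rm -rf outputs/
	@echo "Completed clean"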

Rather than having to type out all of these targets individually, a single target can reference other targets. For example, consider the make test target below, which runs all 4 targets in the correct sequence. You can simplify things further by defining a “default goal” so that typing just make invokes make test. We developers are lazy and cherish saving 5 keystrokes per test run!

.DEFAULT_GOAL := test
.PHONY: test
test: clean lint unit dry

Ideally, typing make should test your entire project, from the simplest syntax checking to the most involved integration/system testing. Here are the full, unedited logs from my dev environment for the narc project. I recommend NOT obscuring your command outputs; it is useful to see which commands generated which outputs.

$ make
Starting clean
find . -name "*.pyc" | xargs -r rm
rm -f nornir.log
rm -rf outputs/
Completed clean
Starting lint
find . -name "*.yaml" | xargs yamllint -s
python tests/jsonlint.py
find . -name "*.py" | xargs pylint

--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

find . -name "*.py" | xargs bandit --skip B101
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: B101
[main] INFO running on Python 3.7.3
Run started:2020-04-07 15:47:27.239623

Test results:
        No issues identified.

Code scanned:
        Total lines of code: 670
        Total lines skipped (#nosec): 0

Run metrics:
        Total issues (by severity):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
        Total issues (by confidence):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
Files skipped (0):
find . -name "*.py" | xargs black -l 85 --check
All done!
11 files would be left unchanged.
Completed lint
Starting unit tests
python -m pytest tests/ --verbose
================= test session starts ==================
platform linux -- Python 3.7.3, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- /home/centos/environments/asapt/bin/python
cachedir: .pytest_cache
rootdir: /home/centos/code/narc
collected 11 items

tests/test_get_cmd.py::test_get_cmd_tcp PASSED [ 9%]
tests/test_get_cmd.py::test_get_cmd_udp PASSED [ 18%]
tests/test_get_cmd.py::test_get_cmd_icmp PASSED [ 27%]
tests/test_get_cmd.py::test_get_cmd_rawip PASSED [ 36%]
tests/test_validation.py::test_validate_id PASSED [ 45%]
tests/test_validation.py::test_validate_in_intf PASSED [ 54%]
tests/test_validation.py::test_validate_should PASSED [ 63%]
tests/test_validation.py::test_validate_ip PASSED [ 72%]
tests/test_validation.py::test_validate_proto PASSED [ 81%]
tests/test_validation.py::test_validate_port PASSED [ 90%]
tests/test_validation.py::test_validate_icmp PASSED [100%]

======================================= 11 passed in 0.09s ========================================
Completed unit tests
Starting dryruns
python runbook.py --dryrun --failonly
head -n 5 outputs/*
==> outputs/result.csv <==
host,id,proto,icmp type,icmp code,src_ip,src_port,dst_ip,dst_port,in_intf,out_intf,action,drop_reason,success

==> outputs/result.json <==
{}
==> outputs/result.txt <==
python runbook.py -d -s
ASAV1@2020-04-07T15:47:28.873590: loading YAML vars
ASAV1@2020-04-07T15:47:28.875094: loading vars succeeded
ASAV1@2020-04-07T15:47:28.875245: starting check DNS OUTBOUND (1/5)
ASAV1@2020-04-07T15:47:28.875291: completed check DNS OUTBOUND (1/5)
ASAV1@2020-04-07T15:47:28.875304: starting check HTTPS OUTBOUND (2/5)
ASAV1@2020-04-07T15:47:28.875333: completed check HTTPS OUTBOUND (2/5)
ASAV1@2020-04-07T15:47:28.875344: starting check SSH INBOUND (3/5)
ASAV1@2020-04-07T15:47:28.875371: completed check SSH INBOUND (3/5)
ASAV1@2020-04-07T15:47:28.875381: starting check PING OUTBOUND (4/5)
ASAV1@2020-04-07T15:47:28.875406: completed check PING OUTBOUND (4/5)
ASAV1@2020-04-07T15:47:28.875415: starting check L2TP OUTBOUND (5/5)
ASAV1@2020-04-07T15:47:28.875457: completed check L2TP OUTBOUND (5/5)
ASAV2@2020-04-07T15:47:28.878727: loading JSON vars
ASAV2@2020-04-07T15:47:28.878880: loading vars succeeded
ASAV2@2020-04-07T15:47:28.879018: starting check DNS OUTBOUND (1/5)
ASAV2@2020-04-07T15:47:28.879060: completed check DNS OUTBOUND (1/5)
ASAV2@2020-04-07T15:47:28.879073: starting check HTTPS OUTBOUND (2/5)
ASAV2@2020-04-07T15:47:28.879100: completed check HTTPS OUTBOUND (2/5)
ASAV2@2020-04-07T15:47:28.879110: starting check SSH INBOUND (3/5)
ASAV2@2020-04-07T15:47:28.879136: completed check SSH INBOUND (3/5)
ASAV2@2020-04-07T15:47:28.879146: starting check PING OUTBOUND (4/5)
ASAV2@2020-04-07T15:47:28.879169: completed check PING OUTBOUND (4/5)
ASAV2@2020-04-07T15:47:28.879179: starting check L2TP OUTBOUND (5/5)
ASAV2@2020-04-07T15:47:28.879202: completed check L2TP OUTBOUND (5/5)
head -n 5 outputs/*
==> outputs/result.csv <==
host,id,proto,icmp type,icmp code,src_ip,src_port,dst_ip,dst_port,in_intf,out_intf,action,drop_reason,success
ASAV1,DNS OUTBOUND,udp,,,192.0.2.2,5000,8.8.8.8,53,UNKNOWN,UNKNOWN,ALLOW,,True
ASAV1,HTTPS OUTBOUND,tcp,,,192.0.2.2,5000,20.0.0.1,443,UNKNOWN,UNKNOWN,ALLOW,,True
ASAV1,SSH INBOUND,tcp,,,fc00:172:31:1::a,5000,fc00:192:0:2::2,22,UNKNOWN,UNKNOWN,DROP,dummy,True
ASAV1,PING OUTBOUND,icmp,8,0,192.0.2.2,,8.8.8.8,,UNKNOWN,UNKNOWN,ALLOW,,True

==> outputs/result.json <==
{
  "ASAV1": {
    "DNS OUTBOUND": {
      "Phase": [
        {

==> outputs/result.txt <==
ASAV1 DNS OUTBOUND -> PASS
ASAV1 HTTPS OUTBOUND -> PASS
ASAV1 SSH INBOUND -> PASS
ASAV1 PING OUTBOUND -> PASS
ASAV1 L2TP OUTBOUND -> PASS
Completed dryruns

OK, so now we have a way to regression test an entire project, but it still requires manual human effort as part of a synchronous process: typing make, waiting for completion, observing results, and taking follow-on actions as needed. If your testing takes more than a few seconds, waiting will get old fast. A better solution would be to automatically start these tests whenever your code changes, then record the results for review later. Put another way, when I type git push, I want to walk away with certainty that my updates will be tested. This is called “Continuous Integration”, or CI, and it is very easy to set up. There are plenty of solutions available: GitLab CI, GitHub Actions (new), CircleCI, Jenkins, and many more. I’m a fan of Travis CI, and that’s what I’ve used for narc. Almost all of these solutions use a YAML file that defines the sequence in which test phases are executed. Below is the .travis.yml file from the project in question. The install phase installs all packages in the requirements.txt file using pip, and subsequent phases run various make targets.

$ cat .travis.yml
---
language: "python"
python:
  - "3.7"

# Install python packages for nornir and linters.
install:
  - "pip install -r requirements.txt"

# Perform pre-checks
before_script:
  - "make lint"
  - "make unit"

# Perform runbook testing with mock ASA inputs.
script:
  - "make dry"
...

Assuming you’ve set up Travis correctly (outside the scope of this blog), you’ll see your results in the web interface, which clearly shows each testing phase and the final result.

[Screenshot: Travis CI web interface showing each testing phase and the final result]

And that, my friends, is how you build a professional code project!
