
Conversation

@themperek
Contributor

@themperek themperek commented Jul 20, 2021

Python-based runner, based on https://github.com/themperek/cocotb-test/blob/master/cocotb_test/simulator.py; see also the cocotb-test slides.

This is just something we can start with.

Trying to follow: https://github.com/cocotb/cocotb/wiki/Python-Test-Runner-Proposal

Issues:

  • some simulators (GHDL, Verilator, maybe also Icarus) require the top HDL module name at build/compile time
  • some simulators (Verilator, maybe Icarus) require the waveform option at build/compile time
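
For reference, the cocotb-test API this starts from is used roughly like this (a minimal sketch; argument names follow the cocotb-test README and may differ in detail):

# Minimal usage sketch of the existing cocotb-test runner referenced above;
# check argument names against the cocotb-test documentation.
from cocotb_test.simulator import run

def test_dff():
    run(
        verilog_sources=["dff.sv"],  # HDL sources to compile
        toplevel="dff",              # top-level HDL module
        module="test_dff",           # Python module containing the cocotb tests
    )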

@themperek themperek added type:feature new or enhanced functionality category:codebase:tests (regression_manager) regarding the code for automating test runs labels Jul 20, 2021
@themperek themperek mentioned this pull request Jul 20, 2021
@codecov

codecov bot commented Jul 20, 2021

Codecov Report

Merging #2634 (a1db296) into master (ed781b1) will decrease coverage by 1.07%.
The diff coverage is 53.38%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2634      +/-   ##
==========================================
- Coverage   72.80%   71.72%   -1.08%     
==========================================
  Files          46       47       +1     
  Lines        8081     8435     +354     
  Branches     1360     1453      +93     
==========================================
+ Hits         5883     6050     +167     
- Misses       1807     1971     +164     
- Partials      391      414      +23     
Impacted Files Coverage Δ
cocotb/runner.py 53.38% <53.38%> (ø)
cocotb/share/lib/vhpi/VhpiImpl.cpp 53.56% <0.00%> (-2.70%) ⬇️
cocotb/share/lib/vhpi/VhpiCbHdl.cpp 56.43% <0.00%> (-1.12%) ⬇️
cocotb/share/lib/simulator/simulatormodule.cpp 74.72% <0.00%> (-0.22%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ed781b1...a1db296. Read the comment docs.

@ktbarrett
Member

Should add the new files to the coverage omit list

@eric-wieser
Member

Shouldn't we instead be measuring coverage for them?

@ktbarrett
Member

@eric-wieser What value would that add, and is it worth the effort? Do we intend to get coverage on simulators we don't have access to via mocking, or do we plan on excluding them? cocotb/config.py is excluded; should it instead be tested, by the same logic, because it contains code used in regressions? We didn't get coverage on the makefiles; is that simply because we couldn't, or because we didn't care?

I don't particularly care about the outcome, I just want consistency and to not have to constantly evaluate our position on every change.

How about we open a discussion about changing our position, and follow up by recording that position in CONTRIBUTING.md and potentially removing cocotb/config.py from the exclusion list, depending on the outcome of that discussion?

@ktbarrett
Member

I also believe we should support running tests from the CLI and not just pytest.

@alexforencich
Contributor

I have some ideas for doing that in a "nice" way. When I have some time in the next couple of weeks, I'll put together a toy example that supports running from both pytest and the command line.

@ktbarrett
Member

@alexforencich One way I thought of was loading a Python module as a configuration object. Declare your project's sources, toplevel, etc. in that module and specify the module on the command line. Command line args would take precedence.
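
A minimal sketch of that idea, assuming a hypothetical config module named my_project and a loader script (none of these names exist in cocotb; they are for illustration only):

# my_project.py -- project configuration expressed as a plain Python module:
#   sources = ["dff.sv"]
#   toplevel = "dff"
#   module = "test_dff"

# run_tests.py -- hypothetical loader: import the config module named on the
# command line and let CLI arguments take precedence over its attributes.
import argparse
import importlib

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("config", help="Python module declaring sources, toplevel, module")
    parser.add_argument("--toplevel", help="override the toplevel from the config module")
    args = parser.parse_args()

    cfg = importlib.import_module(args.config)
    toplevel = args.toplevel or cfg.toplevel  # command line wins over the config module
    print(f"building {toplevel} from {cfg.sources}, running tests from {cfg.module}")

if __name__ == "__main__":
    main()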

@alexforencich
Contributor

My plan is to create a "testbench" object that manages parameters and calls run(), with a main() entry point that does command-line argument parsing, as well as an entry point from Python that can be exposed to pytest with decorators, etc.
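
Roughly, such an object might look like this (a sketch only; the Testbench class, its run() signature, and the pytest usage are assumptions, not an existing cocotb API):

# Hypothetical sketch of the "testbench object" idea: one object, two entry
# points (pytest and the command line). Not part of this PR.
import argparse
from typing import Dict, List


class Testbench:
    def __init__(self, sources: List[str], toplevel: str, module: str) -> None:
        self.sources = sources
        self.toplevel = toplevel
        self.module = module

    def run(self, **parameters: str) -> None:
        # A real implementation would build the design and invoke the simulator
        # with self.sources / self.toplevel / self.module and the parameters.
        print(f"running {self.module} on {self.toplevel} with {parameters}")

    def main(self) -> None:
        # Command-line entry point: parse parameter overrides, then run.
        parser = argparse.ArgumentParser()
        parser.add_argument("--param", action="append", default=[],
                            help="NAME=VALUE parameter override")
        args = parser.parse_args()
        overrides: Dict[str, str] = dict(p.split("=", 1) for p in args.param)
        self.run(**overrides)


tb = Testbench(sources=["dff.sv"], toplevel="dff", module="test_dff")


def test_dff_default():
    # pytest entry point: the same object, driven from a test function
    tb.run(WIDTH="8")


if __name__ == "__main__":
    tb.main()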

@ktbarrett
Member

@marlonjames marlonjames added this to the 1.6 milestone Aug 13, 2021
@marlonjames marlonjames mentioned this pull request Aug 18, 2021
@themperek themperek force-pushed the runtime branch 2 times, most recently from defd812 to 5df4887 Compare August 23, 2021 11:16
@themperek
Contributor Author

I have updated it, mostly following https://github.com/cocotb/cocotb/wiki/Python-Test-Runner-Proposal.

Issues/discrepancies:

  • some simulators (GHDL, Verilator, maybe also Icarus) require the top HDL module name at build/compile time
  • some simulators (Verilator, maybe Icarus) require the waveform option at build/compile time

No idea why Questa-VHDL is failing. It works fine with my setup.

@themperek themperek force-pushed the runtime branch 3 times, most recently from eae485f to d3b1ec8 Compare August 23, 2021 12:31
+ self.verilog_sources
)

cmd.append(["make", "-C", self.build_dir, "-f", "Vtop.mk"])


In Verilator you can set a few options during this make command, like:

make OPT_SLOW="-O3" OPT_FAST="-O3" VM_PARALLEL_BUILDS=1 -j8 -C sim_build/ -f Vtop.mk

Is there a possibility of exposing these to the user?

Contributor Author


For Verilator, one can add an extra argument to test, like make_args. It will break compatibility, but ... 🤷🏻

Member


I don't see how it will break compatibility? Other simulators would just ignore or throw a warning if the argument is given.

Contributor Author


> I don't see how it will break compatibility? Other simulators would just ignore or throw a warning if the argument is given.

We can add make_args either to all simulators (on the Simulator class) or only to the Verilator class. It will break compatibility if we add it only to the Verilator class.
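
As a sketch of how a make_args option might be threaded through to the Verilator makefile step (the trimmed-down Verilator class and make_args shown here are hypothetical illustrations of the discussion above, not the API in this PR):

# Hypothetical sketch only: `make_args` and this Verilator class are
# illustrations, not the implementation merged in this PR.
import subprocess
from typing import List, Optional


class Verilator:
    def __init__(self, build_dir: str = "sim_build") -> None:
        self.build_dir = build_dir

    def build(self, make_args: Optional[List[str]] = None) -> None:
        # ... verilation step elided ...
        cmd = ["make", "-C", self.build_dir, "-f", "Vtop.mk"]
        if make_args:
            # e.g. make_args=["OPT_FAST=-O3", "VM_PARALLEL_BUILDS=1", "-j8"]
            cmd += make_args
        subprocess.run(cmd, check=True)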

Member

@ktbarrett ktbarrett left a comment


Not a fan of this implementation. It's essentially using the builder pattern, which makes the class fragile due to coupling between the methods. This looks more like a Project class and less like a Simulator class. Those two concepts need to be decoupled, which is why I suggested the model types in the proposal.

@themperek
Contributor Author

> Not a fan of this implementation. It's essentially using the builder pattern, which makes the class fragile due to coupling between the methods. This looks more like a Project class and less like a Simulator class. Those two concepts need to be decoupled, which is why I suggested the model types in the proposal.

@ktbarrett Can you sketch what the user API could look like?

@ktbarrett
Member

ktbarrett commented Aug 26, 2021

@themperek

I think the main interface should look like:

import os
from dataclasses import dataclass
from typing import Tuple, Union, Optional, List, Dict

Path = Union[str, "os.PathLike[str]"]


@dataclass
class Library:
    """
    Represents a compiled library

    Should be sufficient to allow users to define IP cores as pre-compiled libraries.
    Should be defined to allow the library to be used in further library or design builds.
    Should be defined so the same library can be used in *multiple* builds.
    The implementation is simulator-dependent.
    Should work in simulators that do library builds at the same time as designs by
    simply passing the paths through.
    Should work with simulators that express built libraries as objects or as directories
    by abstracting away those specifics.
    """


@dataclass
class Design:
    """
    Represents a compiled design

    Should be sufficient to feed to the test phase.
    The implementation is simulator-dependent.
    """


class Simulator(object):
    """
    Defines tasks to run on designs

    Should encapsulate resources necessary to run the simulator *only*:
    * path to executable
    * path to license file
    """

    def build_lib(
        self,
        name: str,
        sources: List[Path] = [],
        libraries: List[Library] = [],
        includes: List[Path] = [],
        defines: List[str] = [],
        parameters: Dict[str, str] = {},
        extra_args: List[str] = [],
        build_dir: Optional[Path] = None,
        env: Optional[Dict[str, str]] = None,
    ) -> Library:
        """
        Builds a library

        Builds library of *name* with *sources*.
        *sources* can be any kind of files the compiler can understand.
        Sets *defines*, *includes*, *parameters*, and simulator-specific *extra_args*
        during the build.
        Links *libraries*.
        The build is done in *build_dir* in the *env* environment.
        If *build_dir* is ``None``, an appropriate build directory is chosen to prevent
        collision with other builds.
        If *env* is ``None``, the caller's environment is used.
        Environment isolation can be done by passing an empty dictionary ``{}``.
        """

    def build(
        self,
        top: Optional[str] = None,
        sources: List[Path] = [],
        libraries: List[Library] = [],
        includes: List[Path] = [],
        defines: List[str] = [],
        parameters: Dict[str, str] = {},
        extra_args: Union[str, List[str]] = "",
        build_dir: Optional[Path] = None,
        env: Optional[Dict[str, str]] = None,
        work_lib_name: Optional[str] = None,
    ) -> Design:
        """
        Builds a design

        Builds design with toplevel entity *top* with the given *sources*.
        *sources* can be any kind of files the compiler can understand.
        Sets *defines*, *includes*, *parameters*, and simulator-specific *extra_args*
        during the build.
        Links *libraries*.
        The build is done in *build_dir* in the *env* environment.
        If *build_dir* is ``None``, an appropriate build directory is chosen to prevent
        collision with other builds.
        If *env* is ``None``, the caller's environment is used.
        Environment isolation can be done by passing an empty dictionary ``{}``.
        """

    def test(
        self,
        design: Design,
        module: Union[str, List[Path]] = [],
        top: Optional[str] = None,
        toplevel_lang: Optional[str] = None,
        plusargs: List[str] = [],
        parameters: Dict[str, str] = {},
        extra_args: Union[str, List[str]] = "",
        testcase: Union[None, str, List[str]] = None,
        seed: Optional[int] = None,
        waves: bool = False,
        gui: bool = False,
        run_dir: Optional[Path] = None,
        env: Optional[Dict[str, str]] = None,
    ) -> Tuple[Path, int]:
        """
        Runs cocotb tests

        Runs tests on the *design* using the test *module*.
        Passes *plusargs*, *parameters*, and simulator-specific *extra_args* to the
        simulator.
        Optionally allows you to set several common switches like
        * testcase
        * seed
        * waves
        * gui
        Runs the simulation in *run_dir* in the *env* environment.
        If *run_dir* is ``None``, the current directory is used.
        If *env* is ``None``, the caller's environment is used.
        Environment isolation can be done by passing an empty dictionary ``{}``.
        """

With example usage for simple_dff looking something like:

import os
import pathlib

from cocotb.project import get_simulator_class

toplevel_lang = os.getenv("TOPLEVEL_LANG", "verilog")
sim = os.getenv("SIM", "icarus")
pwd = pathlib.Path(__file__).resolve().parent

if toplevel_lang == "verilog":
    sources = [
        pwd / "dff.sv"
    ]
else:
    sources = [
        pwd / "dff.vhdl"
    ]

simulator = get_simulator_class(sim)()
compiled_design = simulator.build(top="dff", sources=sources)
results_xml, rc = simulator.test(
    design=compiled_design,
    module="test_dff",
    toplevel_lang=toplevel_lang
)

It might also be nice to return to the user not just the results.xml path, but the results as an object that could be easily checked with assert. I think this would play really well with a unittest-style pytest, where the build is done as a setup action.

@alexforencich
Contributor

What about returning a test results object (could be a namedtuple or something) that contains the result and results.xml path, possibly among other things?
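
For illustration, such a results object could be as small as this (a sketch; the TestResults name and its fields are assumptions, not an existing cocotb type):

# Hypothetical results object; not an existing cocotb type.
from pathlib import Path
from typing import NamedTuple


class TestResults(NamedTuple):
    results_xml: Path  # path to the generated results.xml
    passed: int
    failed: int

    @property
    def ok(self) -> bool:
        return self.failed == 0


# usage inside a pytest test (assuming test() returned such an object):
# results = simulator.test(design=compiled_design, module="test_dff")
# assert results.ok, f"{results.failed} test(s) failed, see {results.results_xml}"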

Also, how are parameters/generics specified? Some simulators want those during build (and changing them requires re-building), others require them during the simulator run (and therefore can be changed without re-building). How can this be handled in a simulator-independent way? i.e. is there a clean way to support that without having to do X if the simulator is A, but Y if the simulator is B, etc.
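
One way to sketch a simulator-independent answer (class names, the parameters_at_build attribute, and the exact tool flags are assumptions for illustration): each backend declares where its tool consumes parameters/generics, and the runner routes a single user-supplied dict to the build or the run step accordingly.

# Hypothetical sketch: route one `parameters` dict to build-time or run-time
# arguments depending on what the backend's tool supports.
from typing import Dict, List


class SimulatorBase:
    # Whether this tool consumes parameters/generics at build time (True)
    # or at simulation run time (False).
    parameters_at_build = False

    def _param_args(self, parameters: Dict[str, str]) -> List[str]:
        raise NotImplementedError

    def build_args(self, parameters: Dict[str, str]) -> List[str]:
        return self._param_args(parameters) if self.parameters_at_build else []

    def run_args(self, parameters: Dict[str, str]) -> List[str]:
        return [] if self.parameters_at_build else self._param_args(parameters)


class VerilatorSim(SimulatorBase):
    parameters_at_build = True  # Verilator bakes parameters in when verilating

    def _param_args(self, parameters: Dict[str, str]) -> List[str]:
        return [f"-G{name}={value}" for name, value in parameters.items()]


class QuestaSim(SimulatorBase):
    parameters_at_build = False  # Questa can override generics at vsim time

    def _param_args(self, parameters: Dict[str, str]) -> List[str]:
        return [f"-g{name}={value}" for name, value in parameters.items()]


print(VerilatorSim().build_args({"WIDTH": "8"}))  # ['-GWIDTH=8']
print(QuestaSim().run_args({"WIDTH": "8"}))       # ['-gWIDTH=8']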

@ktbarrett
Member

ktbarrett commented Jan 25, 2022

Well, that didn't quite go as planned. So what exactly is build_dir and what exactly is work_dir? I would presume the point of a separate build step and test step is to build once and then test many times, potentially even in parallel. So there needs to be a build dir and a sim dir. Builds are done in the build dir and sims are done in the sim dir. I'll revert and push up a proposal (ktbarrett@e199f2e).

@themperek
Contributor Author

themperek commented Jan 25, 2022

> Well, that didn't quite go as planned. So what exactly is build_dir and what exactly is work_dir? I would presume the point of a separate build step and test step is to build once and then test many times, potentially even in parallel. So there needs to be a build dir and a sim dir. Builds are done in the build dir and sims are done in the sim dir. I'll revert and push up a proposal (ktbarrett@e199f2e).

I think it is OK as you propose. The original intention was to be able to execute in a different place, but maybe the way it is now is better.

After you add this, I think I will add one more test where we separate build and test and see how this works. Something like:

@pytest.fixture(scope="module")
def runner():

    runner = get_runner(sim)()

    runner.build(
    )

    return runner

def test_runner(runner):

    runner.test(
    )

Or even separate them completely.

@olofk
Contributor

olofk commented Jan 25, 2022

Admittedly late to the party but have you considered using Edalize for interfacing the EDA tools instead? That would save you from maintaining code for that. Or if there are some reasons against that, perhaps at least use the EDAM API for your runner implementations. That would allow for easy interoperability with other EDAM-speaking tools like FuseSoC, Edalize and TerosHDL

@ktbarrett
Member

@olofk We are avoiding depending on an outside tool in this repo. I'd rather have integration into multiple other toolchains done out of the repo. There were a couple of attempts at integrating cocotb support into Edalize and FuseSoC, but they were deemed unsatisfactory and never completed. Using another tool's project models is an option. How does EDAM significantly differ from PyEDAA's Project Model? IIRC you and Unai were talking about converging the two?

@themperek
Contributor Author

@olofk A few comments, as far as I understand:

@olofk
Contributor

olofk commented Jan 25, 2022

Fair enough. Totally understand the reluctance to depend on external projects. Hoping to make a new effort to work on cocotb integration from the Edalize side. IIUC, the previous attempts have been a bit overcomplicated; it should pretty much just be a matter of loading cocotb as a VPI lib and setting some env vars. I'll happily take up @themperek's offer of assistance since I'm still new to cocotb.

Re the EDAM API, it's just a data structure (could be Python dicts/lists, YAML, JSON or whatever) that describes the design in terms of things like design files (Verilog, VHDL, etc.), TCL scripts, parameters/generics, defines, VPI libs, and toplevel(s), as well as tool-specific options. It is kind of the same information that I have seen being put into cocotb makefiles. The idea is to have something mostly tool-agnostic that can then be turned into tool-specific scripts or project files (which is basically what Edalize does).
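
For illustration, a minimal EDAM-style description of the dff example might look roughly like this (key names follow my reading of the EDAM documentation and should be checked against the spec):

# Rough sketch of an EDAM-style description; verify key names against the
# EDAM specification before relying on them.
edam = {
    "name": "dff_sim",
    "toplevel": "dff",
    "files": [
        {"name": "dff.sv", "file_type": "systemVerilogSource"},
    ],
    "parameters": {
        "WIDTH": {"datatype": "int", "default": 8, "paramtype": "vlogparam"},
    },
    "tool_options": {
        "icarus": {"iverilog_options": ["-g2012"]},
    },
}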

@themperek
Contributor Author

@ktbarrett Maybe we can try to get this in as it is and then continue with smaller PRs on this?

Member

@ktbarrett ktbarrett left a comment


There are definitely some follow on tasks.

  • Handle clean better
  • Support test parallelism by refactoring the class to not use the builder pattern
  • Support EDAM and PyEDAA project models

But what is here does work, so yeah, I think it can come in. I'd leave it open for a couple of days to let any maintainers add some last-minute comments. @cocotb/maintainers

@ktbarrett
Member

I think this can come in now.

@themperek themperek force-pushed the runtime branch 2 times, most recently from 062bec1 to 585b08b Compare January 29, 2022 11:43
Contributor

@cmarqu cmarqu left a comment


Found these two, but fine otherwise for now.

Contributor

@cmarqu cmarqu left a comment


Sorry, some more small things :)

Co-authored-by: Colin Marquardt <[email protected]>