Merge pull request #1040 from antmicro/fix-sphinx-xref-links

Fixing documentation cross-reference links
Tim Ansell 2019-09-25 10:29:07 -07:00 committed by GitHub
commit d78b50af8b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
24 changed files with 170 additions and 323 deletions

View File

@@ -34,7 +34,7 @@ build:
# Run tests of code.
# ------------------------
TEST_EXCLUDE = $(foreach x,$(ALL_EXCLUDE) fuzzers minitests experiments,--ignore $(x))
TEST_EXCLUDE = $(foreach x,$(ALL_EXCLUDE) docs fuzzers minitests experiments,--ignore $(x))
test: test-py test-cpp
@true

View File

@@ -8,7 +8,7 @@ The reason for using the `docs` branch is to avoid running the full CI test suit
Updating the docs is a three-step process: Make your updates, test your updates,
send a pull request.
# 1. Make your updates
## 1. Make your updates
The standard Project X-Ray [contribution guidelines](CONTRIBUTING.md) apply to
doc updates too.
@@ -16,18 +16,18 @@ doc updates too.
Follow your usual process for updating content on GitHub. See GitHub's guide to
[working with forks](https://help.github.com/articles/working-with-forks/).
# 2. Test your updates
## 2. Test your updates
Before sending a pull request with your doc updates, you need to check the
effects of your changes on the page you've updated and on the docs as a whole.
## Check your markup
### Check your markup
There are a few places on the web where you can view ReStructured Text rendered
as HTML. For example:
[https://livesphinx.herokuapp.com/](https://livesphinx.herokuapp.com/)
## Perform basic tests: make html and linkcheck
### Perform basic tests: make html and linkcheck
If your changes are quite simple, you can perform a few basic checks before
sending a pull request. At minimum:
@@ -85,7 +85,7 @@ Steps in detail, on Linux:
1. To leave the shell, type: `exit`.
## Perform more comprehensive testing on your own staging doc site
### Perform more comprehensive testing on your own staging doc site
If your changes are more comprehensive, you should do a full test of your fork
of the docs before sending a pull request to the Project X-Ray repo. You can
@@ -125,7 +125,7 @@ Follow these steps to create your own staging doc site on Read the Docs (RtD):
guide](https://docs.readthedocs.io/en/latest/getting_started.html#import-docs)
for more info.
# 3. Send a pull request
## 3. Send a pull request
Follow your standard GitHub process to send a pull request to the Project X-Ray
repo. See the GitHub guide to [creating a pull request from a

View File

@@ -35,9 +35,6 @@ fuzzers-links:
			ln -s $$i/README.md $${n}.md; \
		else \
			echo "Missing $$i/README.md"; \
			echo "# $$n Fuzzer" > $${n}.md; \
			echo "" >> $${n}.md; \
			echo "Missing README.md!" >> $${n}.md; \
		fi; \
	done
@@ -55,9 +52,6 @@ minitests-links:
			ln -s $$i/README.md $${n}.md; \
		else \
			echo "Missing $$i/README.md"; \
			echo "# $$n Minitest" > $${n}.md; \
			echo "" >> $${n}.md; \
			echo "Missing README.md!" >> $${n}.md; \
		fi; \
	done

View File

@@ -0,0 +1 @@
../../CODE_OF_CONDUCT.md

View File

@@ -0,0 +1 @@
../../COPYING

View File

@@ -0,0 +1 @@
../../UPDATING-THE-DOCS.md

View File

@@ -24,7 +24,7 @@ import recommonmark
import os
import sys
sys.path.insert(0, os.path.abspath('.'))
from markdown_code_symlinks import MarkdownCodeSymlinks
from markdown_code_symlinks import LinkParser, MarkdownSymlinksDomain
# -- General configuration ------------------------------------------------
@@ -52,7 +52,7 @@ templates_path = ['_templates']
# You can specify multiple suffix as a list of string:
source_suffix = ['.rst', '.md']
source_parsers = {
    '.md': 'recommonmark.parser.CommonMarkParser',
    '.md': 'markdown_code_symlinks.LinkParser',
}
# The master toctree document.
@@ -196,9 +196,17 @@ intersphinx_mapping = {'https://docs.python.org/': None}
def setup(app):
    MarkdownCodeSymlinks.find_links()
    github_code_repo = 'https://github.com/SymbiFlow/prjxray/'
    github_code_branch = 'blob/master/'
    docs_root_dir = os.path.realpath(os.path.dirname(__file__))
    code_root_dir = os.path.realpath(os.path.join(docs_root_dir, ".."))
    MarkdownSymlinksDomain.init_domain(
        github_code_repo, github_code_branch, docs_root_dir, code_root_dir)
    MarkdownSymlinksDomain.find_links()
    app.add_domain(MarkdownSymlinksDomain)
    app.add_config_value(
        'recommonmark_config', {
            'github_code_repo': 'https://github.com/SymbiFlow/prjxray',
            'github_code_repo': github_code_repo,
        }, True)
    app.add_transform(MarkdownCodeSymlinks)
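
For reference, the resolution rule that this setup delegates to the external markdown-symlinks package can be sketched as follows. This is only an illustration based on the in-tree helper removed later in this diff; `resolve_markdown_link` and its arguments are hypothetical names, not the package's actual API:

```python
import os

# Values mirroring what is passed to init_domain() above.
GITHUB_CODE_REPO = 'https://github.com/SymbiFlow/prjxray/'
GITHUB_CODE_BRANCH = 'blob/master/'


def resolve_markdown_link(target_rel, code2docs, docs_root_dir, src_dir):
    """Resolve a relative Markdown link found in a symlinked README.

    target_rel -- link target, relative to the code checkout
    code2docs  -- mapping from code-tree paths to docs-tree paths
    docs_root_dir, src_dir -- docs root and directory of the current page
    """
    if target_rel not in code2docs:
        # The target only exists in the code tree: link to its GitHub copy.
        return GITHUB_CODE_REPO + GITHUB_CODE_BRANCH + target_rel
    # The target is also linked into the docs: link to its rendered HTML page.
    doc_path = os.path.join(docs_root_dir, code2docs[target_rel])
    base, _ext = os.path.splitext(os.path.relpath(doc_path, start=src_dir))
    return base + '.html'
```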

View File

@@ -0,0 +1 @@
../../CONTRIBUTING.md

View File

@@ -0,0 +1 @@
../../../fuzzers/

View File

@@ -0,0 +1,101 @@
Fuzzers
=======

Fuzzers are programs that generate a design, feed it to Vivado, and look at the resulting bitstream to draw some conclusion.
This is how the contents of the database are generated.

The general idea behind fuzzers is to pick some element in the device (say a block RAM or IOB) to target.
If you picked the IOB (no one is working on that yet), you'd write a design that is implemented in a specific IOB.
Then you'd write a program that generates variations of the design (called specimens), varying the design parameters, for example the configuration of a single pin.
Much of this program is Tcl that runs inside Vivado to change the design parameters, because it is a bit faster to load one Verilog model and use Tcl to replicate it with varying inputs than to write and load a separate model for each variation.

By looking at all the resulting specimens, you can correlate which bits in which frame correspond to a particular choice in the design.
Looking at the implemented design in Vivado with "Show Routing Resources" turned on is quite helpful for understanding which choices exist.
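
As a rough illustration only (not code from this repository; the function and
data layout below are made up for the sketch), the correlation step boils down
to a set comparison over each specimen's frame bits:

.. code-block:: python

   def candidate_bits(specimens):
       """specimens: list of (feature_enabled, set_of_frame_bits) pairs."""
       enabled = [bits for on, bits in specimens if on]
       disabled = [bits for on, bits in specimens if not on]
       # Bits set in every specimen that enables the feature...
       always_on = set.intersection(*enabled) if enabled else set()
       # ...minus bits that also appear when the feature is disabled.
       seen_off = set.union(*disabled) if disabled else set()
       return always_on - seen_off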

Configurable Logic Blocks (CLB)
-------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *clb*

Block RAM (BRAM)
----------------

.. toctree::
   :maxdepth: 1
   :glob:

   *bram*

Input / Output (IOB)
--------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *iob*

Clocking (CMT, PLL, BUFG, etc)
------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *clk*
   *cmt*

Programmable Interconnect Points (PIPs)
---------------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *int*
   *pip*

Hard Block Fuzzers
------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *xadc

Grid and Wire
-------------

.. toctree::
   :maxdepth: 1
   :glob:

   tilegrid
   tileconn
   ordered_wires
   get_counts
   dump_all

Timing
------

.. toctree::
   :maxdepth: 1
   :glob:

   timing

All Fuzzers
-----------

.. toctree::
   :maxdepth: 1
   :glob:

   *

View File

@@ -1 +0,0 @@
../../../minitests/fixedpnr/README.md

View File

@@ -0,0 +1 @@
../../../minitests/

View File

@@ -0,0 +1,28 @@
Minitests
=========

Minitests are experiments to figure out how things work. They allow us to understand how to better write new fuzzers.

.. toctree::
   :maxdepth: 1
   :glob:

   clb-bused
   clb-carry_cin_cyinit
   clb-configs
   clb-muxf8
   clkbuf
   eccbits
   fixedpnr
   litex
   lvb_long_mux
   nodes_wires_list
   partial_reconfig_flow
   picorv32-v
   picorv32-y
   pip-switchboxes
   roi_harness
   srl
   tiles_wires_pips
   timing
   util

View File

@@ -1 +0,0 @@
../../../minitests/partial_reconfig_flow/README.md

View File

@@ -1 +0,0 @@
../../../minitests/picorv32-v/README.md

View File

@@ -1 +0,0 @@
../../../minitests/picorv32-y/README.md

View File

@@ -1 +0,0 @@
../../../minitests/roi_harness/README.md

View File

@@ -1,118 +1,3 @@
Fuzzers
=======

Fuzzers are things that generate a design, feed it to Vivado, and look at the resulting bitstream to make some conclusion.
This is how the contents of the database are generated.

The general idea behind fuzzers is to pick some element in the device (say a block RAM or IOB) to target.
If you picked the IOB (no one is working on that yet), you'd write a design that is implemented in a specific IOB.
Then you'd create a program that creates variations of the design (called specimens) that vary the design parameters, for example, changing the configuration of a single pin.
A lot of this program is TCL that runs inside Vivado to change the design parameters, because it is a bit faster to load in one Verilog model and use TCL to replicate it with varying inputs instead of having different models and loading them individually.

By looking at all the resulting specimens, you can correlate which bits in which frame correspond to a particular choice in the design.
Looking at the implemented design in Vivado with "Show Routing Resources" turned on is quite helpful in understanding what all choices exist.

Configurable Logic Blocks (CLB)
-------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*clb*

Block RAM (BRAM)
----------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*bram*

Input / Output (IOB)
--------------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*iob*

Clocking (CMT, PLL, BUFG, etc)
------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*clk*
   fuzzers/*cmt*

Programmable Interconnect Points (PIPs)
---------------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*int*
   fuzzers/*pip*

Hard Block Fuzzers
------------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*xadc

Grid and Wire
-------------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/tilegrid
   fuzzers/tileconn
   fuzzers/ordered_wires
   fuzzers/get_counts
   fuzzers/dump_all

Timing
------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/timing

All Fuzzers
-----------

.. toctree::
   :maxdepth: 1
   :glob:

   fuzzers/*

Minitests
=========

Minitests are experiments to figure out how things work. They allow us to understand how to better write new fuzzers.

.. toctree::
   :maxdepth: 1
   :caption: Current Minitests
   :glob:

   minitests/*

Tools
=====

View File

@@ -24,12 +24,18 @@ to develop a free and open Verilog to bitstream toolchain for these devices.
   architecture/dram_configuration
   architecture/glossary
   architecture/reference
   architecture/code_of_conduct
   architecture/updating_the_docs
   architecture/copying

.. toctree::
   :maxdepth: 2
   :caption: Database Development Process

   db_dev_process/readme
   db_dev_process/contributing
   db_dev_process/fuzzers/index
   db_dev_process/minitests/index
   db_dev_process/parts

.. toctree::

View File

@@ -1,179 +0,0 @@
import logging
import os

from recommonmark import transform

"""
Allow linking of Markdown documentation from the source code tree into the Sphinx
documentation tree.

The Markdown documents will have links relative to the source code root, rather
than the place they are now linked to - this code fixes these paths up.

We also want links from two Markdown documents found in the Sphinx docs to
work, so that is also fixed up.
"""


def path_contains(parent_path, child_path):
    """Check a path contains another path.

    >>> path_contains("a/b", "a/b")
    True
    >>> path_contains("a/b", "a/b/")
    True
    >>> path_contains("a/b", "a/b/c")
    True
    >>> path_contains("a/b", "c")
    False
    >>> path_contains("a/b", "c/b")
    False
    >>> path_contains("a/b", "c/../a/b/d")
    True
    >>> path_contains("../a/b", "../a/b/d")
    True
    >>> path_contains("../a/b", "../a/c")
    False
    >>> path_contains("a", "abc")
    False
    >>> path_contains("aa", "abc")
    False
    """
    # Append a separator to the end of both paths to work around the fact that
    # os.path.commonprefix does character by character comparisons rather than
    # path segment by path segment.
    parent_path = os.path.join(os.path.normpath(parent_path), '')
    child_path = os.path.join(os.path.normpath(child_path), '')
    common_path = os.path.commonprefix((parent_path, child_path))
    return common_path == parent_path


def relative(parent_dir, child_path):
    """Get the relative path between a path that contains another path."""
    child_dir = os.path.dirname(child_path)
    assert path_contains(parent_dir, child_dir), "{} not inside {}".format(
        child_path, parent_dir)
    return os.path.relpath(child_path, start=parent_dir)


class MarkdownCodeSymlinks(transform.AutoStructify, object):
    docs_root_dir = os.path.realpath(os.path.dirname(__file__))
    code_root_dir = os.path.realpath(os.path.join(docs_root_dir, "..", ".."))

    mapping = {
        'docs2code': {},
        'code2docs': {},
    }

    @classmethod
    def relative_code(cls, url):
        """Get a value relative to the code directory."""
        return relative(cls.code_root_dir, url)

    @classmethod
    def relative_docs(cls, url):
        """Get a value relative to the docs directory."""
        return relative(cls.docs_root_dir, url)

    @classmethod
    def add_mapping(cls, docs_rel, code_rel):
        assert docs_rel not in cls.mapping['docs2code'], """\
Assertion error! Document already in mapping!
New Value: {}
Current Value: {}
""".format(docs_rel, cls.mapping['docs2code'][docs_rel])
        assert code_rel not in cls.mapping['code2docs'], """\
Assertion error! Document already in mapping!
New Value: {}
Current Value: {}
""".format(docs_rel, cls.mapping['code2docs'][code_rel])
        cls.mapping['docs2code'][docs_rel] = code_rel
        cls.mapping['code2docs'][code_rel] = docs_rel

    @classmethod
    def find_links(cls):
        """Walk the docs dir and find links to docs in the code dir."""
        for root, dirs, files in os.walk(cls.docs_root_dir):
            for fname in files:
                fpath = os.path.abspath(os.path.join(root, fname))
                if not os.path.islink(fpath):
                    continue

                link_path = os.path.join(root, os.readlink(fpath))

                # Is link outside the code directory?
                if not path_contains(cls.code_root_dir, link_path):
                    continue

                # Is link internal to the docs directory?
                if path_contains(cls.docs_root_dir, link_path):
                    continue

                docs_rel = cls.relative_docs(fpath)
                code_rel = cls.relative_code(link_path)
                cls.add_mapping(docs_rel, code_rel)

        import pprint
        pprint.pprint(cls.mapping)

    @property
    def url_resolver(self):
        return self._url_resolver

    @url_resolver.setter
    def url_resolver(self, value):
        print(self, value)

    # Resolve a link from one markdown to another document.
    def _url_resolver(self, ourl):
        """Resolve a URL found in a markdown file."""
        assert self.docs_root_dir == os.path.realpath(self.root_dir), """\
Configuration error! Document Root != Current Root
Document Root: {}
Current Root: {}
""".format(self.docs_root_dir, self.root_dir)

        src_path = os.path.abspath(self.document['source'])
        src_dir = os.path.dirname(src_path)

        dst_path = os.path.abspath(os.path.join(self.docs_root_dir, ourl))
        dst_rsrc = os.path.relpath(dst_path, start=src_dir)

        src_rdoc = self.relative_docs(src_path)

        print()
        print("url_resolver")
        print(src_path)
        print(dst_path)
        print(dst_rsrc)
        print(src_rdoc)

        # Is the source document a linked one?
        if src_rdoc not in self.mapping['docs2code']:
            # Don't do any rewriting on non-linked markdown.
            url = ourl
        # Is the destination also inside docs?
        elif dst_rsrc not in self.mapping['code2docs']:
            # Return a path to the GitHub repo.
            url = "{}/blob/master/{}".format(
                self.config['github_code_repo'], dst_rsrc)
        else:
            url = os.path.relpath(
                os.path.join(
                    self.docs_root_dir, self.mapping['code2docs'][dst_rsrc]),
                start=src_dir)

            base_url, ext = os.path.splitext(url)
            assert ext in (".md",
                           ".markdown"), ("Unknown extension {}".format(ext))
            url = "{}.html".format(base_url)

        print("---")
        print(ourl)
        print(url)
        print()
        return url


if __name__ == "__main__":
    import doctest
    doctest.testmod()

View File

@@ -7,3 +7,6 @@ recommonmark
sphinx-markdown-tables
git+https://github.com/SymbiFlow/sphinx_symbiflow_theme
sphinxcontrib-napoleon
# Markdown cross-reference solver library
git+https://github.com/SymbiFlow/sphinxcontrib-markdown-symlinks

View File

@@ -10,7 +10,7 @@ harness into a bitstream with fasm2frame and xc7patch. Since writing FASM is
rather tedious, rules are provided to convert Verilog ROI designs into FASM via
Vivado.
# Usage
## Usage
make rules are provided for generating each step of the process so that
intermediate forms can be analyzed. Assuming you have a .fasm file, invoking
@@ -20,7 +20,7 @@ the %\_hand\_crafted.bit rule will generate a merged bitstream:
make foo.hand\_crafted.bit # reads foo.fasm
```
# Using Vivado to generate .fasm
## Using Vivado to generate .fasm
Vivado's Partial Reconfiguration flow can be used to synthesize and implement a
ROI design that is then converted to .fasm. Write a Verilog module

View File

@@ -81,7 +81,7 @@ The following configurations are supported;
# 125 MHz CLK onboard
K17
# Quickstart
## Quickstart
```
source settings/artix7.sh
@@ -91,7 +91,7 @@ make clean
make copy
```
# How it works
## How it works
Basic idea:
- LOC LUTs in the ROI to terminate input and output routing

View File

@@ -13,14 +13,14 @@ comparison between the reduced model implemented in prjxray and the Vivado
timing results.
Model quality
=============
-------------
The prjxray timing model handles most nets to within +/- 1.5% delay. The large exception to
this is clock nets, which appear to use a table lookup that is not understood
at this time.
Running the model
=================
-----------------
The provided Makefile will by default compile all examples. If a specific design
family is desired, the family name can be provided. If a specific design within