fixing documentation cross-reference links

Signed-off-by: Alessandro Comodi <acomodi@antmicro.com>
Alessandro Comodi 2019-09-02 16:54:43 +02:00
parent a7a3431ceb
commit d0ab539f25
14 changed files with 273 additions and 189 deletions

View File

@@ -130,7 +130,7 @@ do.
There are also "minitests" which are designs which can be viewed by a human in
Vivado to better understand how to generate more useful designs.

-### [Experiments](experiments)
+### [Experiments](https://github.com/SymbiFlow/prjxray/blob/master/experiments)

Experiments are like "minitests" except are only useful for a short period of
time. Files are committed here to allow people to see how we are trying to
@@ -146,21 +146,21 @@ Fuzzers are the scripts which generate the large number of bitstream.
They are called "fuzzers" because they follow an approach similar to the
[idea of software testing through fuzzing](https://en.wikipedia.org/wiki/Fuzzing).

-### [Tools](tools) & [Libs](libs)
+### [Tools](https://github.com/SymbiFlow/prjxray/blob/master/tools) & [Libs](https://github.com/SymbiFlow/prjxray/blob/master/libs)

Tools & libs are useful tools (and libraries) for converting the resulting
bitstreams into various formats.

Binaries in the tools directory are considered more mature and stable then
-those in the [utils](utils) directory and could be actively used in other
+those in the [utils](https://github.com/SymbiFlow/prjxray/blob/master/utils) directory and could be actively used in other
projects.

-### [Utils](utils)
+### [Utils](https://github.com/SymbiFlow/prjxray/blob/master/utils)

Utils are various tools which are still highly experimental. These tools should
only be used inside this repository.

-### [Third Party](third_party)
+### [Third Party](https://github.com/SymbiFlow/prjxray/blob/master/third_party)

Third party contains code not developed as part of Project X-Ray.
@@ -168,7 +168,7 @@ Third party contains code not developed as part of Project X-Ray.

# Database

Running the all fuzzers in order will produce a database which documents the
-bitstream format in the [database](database) directory.
+bitstream format in the [database](https://github.com/SymbiFlow/prjxray/blob/master/database) directory.

As running all these fuzzers can take significant time,
[Tim 'mithro' Ansell <me@mith.ro>](https://github.com/mithro) has graciously
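
The pattern behind the README edits above is mechanical: every repository-relative link target is rewritten to an absolute URL under the master branch of the SymbiFlow/prjxray GitHub repository. A minimal sketch of that rewrite rule in Python; the helper itself is illustrative and not part of the commit:

```python
# Illustrative only: rebuild a repository-relative link target as the
# absolute GitHub URL form used in the README changes above.
GITHUB_CODE_REPO = "https://github.com/SymbiFlow/prjxray"

def to_github_url(relative_target, repo=GITHUB_CODE_REPO, branch="master"):
    """e.g. 'experiments' -> 'https://github.com/SymbiFlow/prjxray/blob/master/experiments'"""
    return "{}/blob/{}/{}".format(repo, branch, relative_target)

print(to_github_url("utils"))
# https://github.com/SymbiFlow/prjxray/blob/master/utils
```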

View File

@@ -8,7 +8,7 @@ The reason for using the `docs` branch is to avoid running the full CI test suit
Updating the docs is a three-step process: Make your updates, test your updates,
send a pull request.

-# 1. Make your updates
+## 1. Make your updates

The standard Project X-Ray [contribution guidelines](CONTRIBUTING.md) apply to
doc updates too.
@@ -16,18 +16,18 @@ doc updates too.
Follow your usual process for updating content on GitHub. See GitHub's guide to
[working with forks](https://help.github.com/articles/working-with-forks/).

-# 2. Test your updates
+## 2. Test your updates

Before sending a pull request with your doc updates, you need to check the
effects of your changes on the page you've updated and on the docs as a whole.

-## Check your markup
+### Check your markup

There are a few places on the web where you can view ReStructured Text rendered
as HTML. For example:
[https://livesphinx.herokuapp.com/](https://livesphinx.herokuapp.com/)

-## Perform basic tests: make html and linkcheck
+### Perform basic tests: make html and linkcheck

If your changes are quite simple, you can perform a few basic checks before
sending a pull request. At minimum:
@@ -85,7 +85,7 @@ Steps in detail, on Linux:
1. To leave the shell, type: `exit`.

-## Perform more comprehensive testing on your own staging doc site
+### Perform more comprehensive testing on your own staging doc site

If your changes are more comprehensive, you should do a full test of your fork
of the docs before sending a pull request to the Project X-Ray repo. You can
@@ -125,7 +125,7 @@ Follow these steps to create your own staging doc site on Read the Docs (RtD):
guide](https://docs.readthedocs.io/en/latest/getting_started.html#import-docs)
for more info.

-# 3. Send a pull request
+## 3. Send a pull request

Follow your standard GitHub process to send a pull request to the Project X-Ray
repo. See the GitHub guide to [creating a pull request from a
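
The `make html` and `make linkcheck` checks referenced in this file can also be driven from Python. A rough sketch using Sphinx's command-line entry point; the `docs/` source directory and `docs/_build/` output directory are assumptions about the usual layout, not taken from this commit:

```python
# Rough sketch: run the Sphinx 'html' and 'linkcheck' builders, roughly
# equivalent to 'make html' and 'make linkcheck' in the docs directory.
# Requires Sphinx >= 1.7 (earlier versions exposed sphinx.build_main instead).
from sphinx.cmd.build import build_main

SOURCE_DIR = "docs"        # assumed location of conf.py and the .rst/.md sources
BUILD_DIR = "docs/_build"  # assumed output directory

for builder in ("html", "linkcheck"):
    status = build_main(["-b", builder, SOURCE_DIR, "{}/{}".format(BUILD_DIR, builder)])
    if status != 0:
        raise SystemExit("Sphinx '{}' build reported errors".format(builder))
```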

View File

@@ -0,0 +1 @@
../../CODE_OF_CONDUCT.md

View File

@@ -0,0 +1 @@
../../COPYING

View File

@@ -0,0 +1 @@
../../UPDATING-THE-DOCS.md

View File

@@ -24,7 +24,7 @@ import recommonmark
import os
import sys
sys.path.insert(0, os.path.abspath('.'))
-from markdown_code_symlinks import MarkdownCodeSymlinks
+from markdown_code_symlinks import LinkParser, PrjxrayDomain

# -- General configuration ------------------------------------------------
@@ -52,7 +52,7 @@ templates_path = ['_templates']
# You can specify multiple suffix as a list of string:
source_suffix = ['.rst', '.md']
source_parsers = {
-    '.md': 'recommonmark.parser.CommonMarkParser',
+    '.md': 'markdown_code_symlinks.LinkParser',
}

# The master toctree document.
@@ -196,9 +196,9 @@ intersphinx_mapping = {'https://docs.python.org/': None}
def setup(app):
-    MarkdownCodeSymlinks.find_links()
+    PrjxrayDomain.find_links()
+    app.add_domain(PrjxrayDomain)
    app.add_config_value(
        'recommonmark_config', {
            'github_code_repo': 'https://github.com/SymbiFlow/prjxray',
        }, True)
-    app.add_transform(MarkdownCodeSymlinks)
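
Condensed from the hunks above, the new `conf.py` wiring has three parts: Markdown sources are parsed by the extension's `LinkParser`, the custom `prjxray` domain is registered so its `resolve_xref` can run, and the GitHub repository URL is made available for links that leave the docs tree. A condensed sketch, assuming `markdown_code_symlinks.py` is importable from the docs directory as in this repository, with the unrelated Sphinx settings omitted:

```python
# Condensed view of the conf.py pieces touched by this commit.
from markdown_code_symlinks import LinkParser, PrjxrayDomain

source_suffix = ['.rst', '.md']
source_parsers = {
    '.md': 'markdown_code_symlinks.LinkParser',
}

def setup(app):
    # Build the docs<->code mapping from the symlinks before parsing starts.
    PrjxrayDomain.find_links()
    # Register the domain so pending_xref nodes created by LinkParser resolve.
    app.add_domain(PrjxrayDomain)
    # Links pointing outside the docs tree are rewritten against this repo.
    app.add_config_value(
        'recommonmark_config', {
            'github_code_repo': 'https://github.com/SymbiFlow/prjxray',
        }, True)
```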

View File

@@ -0,0 +1 @@
../../CONTRIBUTING.md

View File

@@ -0,0 +1 @@
../../../fuzzers

View File

@@ -0,0 +1,101 @@
Fuzzers
=======

Fuzzers are things that generate a design, feed it to Vivado, and look at the resulting bitstream to make some conclusion.
This is how the contents of the database are generated.

The general idea behind fuzzers is to pick some element in the device (say a block RAM or IOB) to target.
If you picked the IOB (no one is working on that yet), you'd write a design that is implemented in a specific IOB.
Then you'd create a program that creates variations of the design (called specimens) that vary the design parameters, for example, changing the configuration of a single pin.
A lot of this program is TCL that runs inside Vivado to change the design parameters, because it is a bit faster to load in one Verilog model and use TCL to replicate it with varying inputs instead of having different models and loading them individually.

By looking at all the resulting specimens, you can correlate which bits in which frame correspond to a particular choice in the design.
Looking at the implemented design in Vivado with "Show Routing Resources" turned on is quite helpful in understanding what all choices exist.

Configurable Logic Blocks (CLB)
-------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *clb*

Block RAM (BRAM)
----------------

.. toctree::
   :maxdepth: 1
   :glob:

   *bram*

Input / Output (IOB)
--------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *iob*

Clocking (CMT, PLL, BUFG, etc)
------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *clk*
   *cmt*

Programmable Interconnect Points (PIPs)
---------------------------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *int*
   *pip*

Hard Block Fuzzers
------------------

.. toctree::
   :maxdepth: 1
   :glob:

   *xadc

Grid and Wire
-------------

.. toctree::
   :maxdepth: 1
   :glob:

   tilegrid
   tileconn
   ordered_wires
   get_counts
   dump_all

Timing
------

.. toctree::
   :maxdepth: 1
   :glob:

   timing

All Fuzzers
-----------

.. toctree::
   :maxdepth: 1
   :glob:

   *
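
The specimen-correlation idea described at the top of this new index boils down to a set-difference exercise over the bits seen in each specimen. A toy illustration in Python; the option names and frame addresses below are invented purely for illustration and are not taken from the database:

```python
# Toy illustration of specimen correlation: bits asserted in every specimen
# are background, bits unique to a parameter value are candidates for the
# database entry documenting that value. All values below are made up.
specimens = {
    'OPTION_A': {(0x00400000, 12), (0x00400000, 31), (0x00400000, 47)},
    'OPTION_B': {(0x00400000, 12), (0x00400000, 31), (0x00400000, 48)},
    'OPTION_C': {(0x00400000, 12), (0x00400000, 31)},
}

baseline = set.intersection(*specimens.values())
for value, bits in sorted(specimens.items()):
    print(value, sorted(bits - baseline))
# OPTION_A [(4194304, 47)]
# OPTION_B [(4194304, 48)]
# OPTION_C []
```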

View File

@@ -0,0 +1 @@
../../../minitests

View File

@@ -0,0 +1,29 @@
Minitests
=========

Minitests are experiments to figure out how things work. They allow us to understand how to better write new fuzzers.

.. toctree::
   :maxdepth: 1
   :glob:

   clb-bused
   clb-carry_cin_cyinit
   clb-configs
   clb-muxf8
   clkbuf
   eccbits
   fixedpnr
   index.rst
   litex
   lvb_long_mux
   nodes_wires_list
   partial_reconfig_flow
   picorv32-v
   picorv32-y
   pip-switchboxes
   roi_harness
   srl
   tiles_wires_pips
   timing
   util

View File

@@ -1,118 +1,3 @@
-Fuzzers
-=======
-
-Fuzzers are things that generate a design, feed it to Vivado, and look at the resulting bitstream to make some conclusion.
-This is how the contents of the database are generated.
-
-The general idea behind fuzzers is to pick some element in the device (say a block RAM or IOB) to target.
-If you picked the IOB (no one is working on that yet), you'd write a design that is implemented in a specific IOB.
-Then you'd create a program that creates variations of the design (called specimens) that vary the design parameters, for example, changing the configuration of a single pin.
-A lot of this program is TCL that runs inside Vivado to change the design parameters, because it is a bit faster to load in one Verilog model and use TCL to replicate it with varying inputs instead of having different models and loading them individually.
-
-By looking at all the resulting specimens, you can correlate which bits in which frame correspond to a particular choice in the design.
-Looking at the implemented design in Vivado with "Show Routing Resources" turned on is quite helpful in understanding what all choices exist.
-
-Configurable Logic Blocks (CLB)
--------------------------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*clb*
-
-Block RAM (BRAM)
-----------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*bram*
-
-Input / Output (IOB)
---------------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*iob*
-
-Clocking (CMT, PLL, BUFG, etc)
-------------------------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*clk*
-   fuzzers/*cmt*
-
-Programmable Interconnect Points (PIPs)
----------------------------------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*int*
-   fuzzers/*pip*
-
-Hard Block Fuzzers
-------------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*xadc
-
-Grid and Wire
--------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/tilegrid
-   fuzzers/tileconn
-   fuzzers/ordered_wires
-   fuzzers/get_counts
-   fuzzers/dump_all
-
-Timing
-------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/timing
-
-All Fuzzers
------------
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   fuzzers/*
-
-Minitests
-=========
-
-Minitests are experiments to figure out how things work. They allow us to understand how to better write new fuzzers.
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Current Minitests
-   :glob:
-
-   minitests/*
-
Tools
=====

View File

@@ -24,12 +24,18 @@ to develop a free and open Verilog to bitstream toolchain for these devices.
   architecture/dram_configuration
   architecture/glossary
   architecture/reference
+   architecture/code_of_conduct
+   architecture/updating_the_docs
+   architecture/copying

.. toctree::
   :maxdepth: 2
   :caption: Database Development Process

   db_dev_process/readme
+   db_dev_process/contributing
+   db_dev_process/fuzzers/index
+   db_dev_process/minitests/index
   db_dev_process/parts

.. toctree::
View File

@@ -1,7 +1,19 @@
import logging
-import os
+import os, sys
+from os.path import splitext
+from docutils import nodes
+from sphinx import addnodes
+from sphinx.roles import XRefRole
+from sphinx.domains import Domain
+from sphinx.util.nodes import make_refnode
+if sys.version_info < (3, 0):
+    from urlparse import urlparse, unquote
+else:
+    from urllib.parse import urlparse, unquote
+from recommonmark import parser
+from recommonmark import transform

"""
Allow linking of Markdown documentation from the source code tree into the Sphinx
documentation tree.
@@ -13,7 +25,6 @@ We also want links from two Markdown documents found in the Sphinx docs to
work, so that is also fixed up.
"""
-

def path_contains(parent_path, child_path):
    """Check a path contains another path.
@@ -55,9 +66,21 @@ def relative(parent_dir, child_path):
    return os.path.relpath(child_path, start=parent_dir)

-class MarkdownCodeSymlinks(transform.AutoStructify, object):
+class PrjxrayDomain(Domain):
+    """
+    Extension of the Domain class to implement custom cross-reference links
+    solve methodology
+    """
+    name = 'prjxray'
+    label = 'Prjxray'
+
+    roles = {
+        'xref': XRefRole(),
+    }
+
    docs_root_dir = os.path.realpath(os.path.dirname(__file__))
-    code_root_dir = os.path.realpath(os.path.join(docs_root_dir, "..", ".."))
+    code_root_dir = os.path.realpath(os.path.join(docs_root_dir, ".."))

    mapping = {
        'docs2code': {},
@@ -81,6 +104,7 @@ Assertion error! Document already in mapping!
New Value: {}
Current Value: {}
""".format(docs_rel, cls.mapping['docs2code'][docs_rel])
+
        assert code_rel not in cls.mapping['code2docs'], """\
Assertion error! Document already in mapping!
New Value: {}
@@ -94,6 +118,26 @@ Current Value: {}
    def find_links(cls):
        """Walk the docs dir and find links to docs in the code dir."""
        for root, dirs, files in os.walk(cls.docs_root_dir):
+            for dname in dirs:
+                dpath = os.path.abspath(os.path.join(root, dname))
+                if not os.path.islink(dpath):
+                    continue
+
+                link_path = os.path.join(root, os.readlink(dpath))
+
+                # Is link outside the code directory?
+                if not path_contains(cls.code_root_dir, link_path):
+                    continue
+
+                # Is link internal to the docs directory?
+                if path_contains(cls.docs_root_dir, link_path):
+                    continue
+
+                docs_rel = cls.relative_docs(dpath)
+                code_rel = cls.relative_code(link_path)
+                cls.add_mapping(docs_rel, code_rel)
+
            for fname in files:
                fpath = os.path.abspath(os.path.join(root, fname))
@@ -113,66 +157,79 @@ Current Value: {}
                code_rel = cls.relative_code(link_path)
                cls.add_mapping(docs_rel, code_rel)

        import pprint
        pprint.pprint(cls.mapping)

-    @property
-    def url_resolver(self):
-        return self._url_resolver
-
-    @url_resolver.setter
-    def url_resolver(self, value):
-        print(self, value)
-
-    # Resolve a link from one markdown to another document.
-    def _url_resolver(self, ourl):
-        """Resolve a URL found in a markdown file."""
-        assert self.docs_root_dir == os.path.realpath(self.root_dir), """\
-Configuration error! Document Root != Current Root
-Document Root: {}
-Current Root: {}
-""".format(self.docs_root_dir, self.root_dir)
-        src_path = os.path.abspath(self.document['source'])
-        src_dir = os.path.dirname(src_path)
-        dst_path = os.path.abspath(os.path.join(self.docs_root_dir, ourl))
-        dst_rsrc = os.path.relpath(dst_path, start=src_dir)
-        src_rdoc = self.relative_docs(src_path)
-
-        print
-        print("url_resolver")
-        print(src_path)
-        print(dst_path)
-        print(dst_rsrc)
-        print(src_rdoc)
-
-        # Is the source document a linked one?
-        if src_rdoc not in self.mapping['docs2code']:
-            # Don't do any rewriting on non-linked markdown.
-            url = ourl
-        # Is the destination also inside docs?
-        elif dst_rsrc not in self.mapping['code2docs']:
-            # Return a path to the GitHub repo.
-            url = "{}/blob/master/{}".format(
-                self.config['github_code_repo'], dst_rsrc)
-        else:
-            url = os.path.relpath(
-                os.path.join(
-                    self.docs_root_dir, self.mapping['code2docs'][dst_rsrc]),
-                start=src_dir)
-            base_url, ext = os.path.splitext(url)
-            assert ext in (".md",
-                           ".markdown"), ("Unknown extension {}".format(ext))
-            url = "{}.html".format(base_url)
-
-        print("---")
-        print(ourl)
-        print(url)
-        print
-        return url
+    @classmethod
+    def remove_extension(cls, path):
+        return filename
+
+    # Overriden method to solve the cross-reference link
+    def resolve_xref(self, env, fromdocname, builder,
+                     typ, target, node, contnode):
+        if '#' in target:
+            todocname, targetid = target.split('#')
+        else:
+            todocname = target
+            targetid = ''
+
+        # Removing filename extension (e.g. contributing.md -> contributing)
+        todocname, _ = os.path.splitext(self.mapping['code2docs'][todocname])
+
+        newnode = make_refnode(builder, fromdocname, todocname, targetid, contnode[0])
+        print(newnode)
+
+        return newnode
+
+    def resolve_any_xref(self, env, fromdocname, builder,
+                         target, node, contnode):
+        res = self.resolve_xref(env, fromdocname, builder, 'xref', target, node, contnode)
+        return [('prjxray:xref', res)]
+
+
+class LinkParser(parser.CommonMarkParser, object):
+    def visit_link(self, mdnode):
+        ref_node = nodes.reference()
+        destination = mdnode.destination
+        ref_node.line = self._get_line(mdnode)
+        if mdnode.title:
+            ref_node['title'] = mdnode.title
+
+        url_check = urlparse(destination)
+
+        # If there's not a url scheme (e.g. 'https' for 'https:...' links),
+        # or there is a scheme but it's not in the list of known_url_schemes,
+        # then assume it's a cross-reference and pass it to Sphinx as an `:any:` ref.
+        known_url_schemes = self.config.get('known_url_schemes')
+        if known_url_schemes:
+            scheme_known = url_check.scheme in known_url_schemes
+        else:
+            scheme_known = bool(url_check.scheme)
+
+        is_dest_xref = url_check.fragment and destination[:1] != '#'
+
+        ref_node['refuri'] = destination
+        next_node = ref_node
+
+        # Adds pending-xref wrapper to unsolvable cross-references
+        if (is_dest_xref or not url_check.fragment) and not scheme_known:
+            wrap_node = addnodes.pending_xref(
+                reftarget=unquote(destination),
+                reftype='xref',
+                refdomain='prjxray',  # Added to enable cross-linking
+                refexplicit=True,
+                refwarn=True
+            )
+            # TODO also not correct sourcepos
+            wrap_node.line = self._get_line(mdnode)
+            if mdnode.title:
+                wrap_node['title'] = mdnode.title
+            wrap_node.append(ref_node)
+            next_node = wrap_node
+
+        self.current_node.append(next_node)
+        self.current_node = ref_node


if __name__ == "__main__":
    import doctest
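
Taken together, the two classes above split the work: `LinkParser.visit_link` wraps any Markdown link that has no recognised URL scheme in a `pending_xref` node under the `prjxray` domain, and `PrjxrayDomain.resolve_xref` later maps the code-tree target to its docs-tree counterpart and strips the file extension. A stand-alone sketch of just that lookup step, using a made-up mapping; in the extension the mapping is populated by `find_links()` from the symlinks added in this commit:

```python
import os

# Made-up code->docs mapping for illustration; the real one is built by
# PrjxrayDomain.find_links() from the symlinks under docs/.
code2docs = {
    'CONTRIBUTING.md': 'db_dev_process/contributing.md',
    'fuzzers': 'db_dev_process/fuzzers',
}

def resolve(target):
    """Mirror the core of resolve_xref: split an optional '#anchor',
    map the code-tree path to its docs page and drop the extension."""
    docname, _, anchor = target.partition('#')
    docname, _ = os.path.splitext(code2docs[docname])
    return docname, anchor

print(resolve('CONTRIBUTING.md#code-of-conduct'))  # ('db_dev_process/contributing', 'code-of-conduct')
print(resolve('fuzzers'))                          # ('db_dev_process/fuzzers', '')
```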