mirror of https://github.com/VLSIDA/OpenRAM.git
Merge branch 'dev' into delay_ctrl
This commit is contained in:
commit 2f5d3b6faf
@ -0,0 +1,27 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**Version**
Which commit are you using?

**To Reproduce**
What did you do to demonstrate the bug?
Please include the configuration file you used.

**Expected behavior**
A clear and concise description of what you expected to happen.

**Logs**
If applicable, add logs or output to help explain your problem.

**Additional context**
Add any other context about the problem here.
@ -0,0 +1,108 @@
# How do the workflows work?

1. When there is a push to the private repo's 'dev' branch (private/dev), the
`regress` workflow runs the regression tests if the commit is not versioned.
The `sync` workflow runs and makes sure that a versioned commit has a tag.
See [important notes](#important-notes) for what "versioned commit" means.

1. If the `regress` workflow fails on 'private/dev', the `sync` workflow gets triggered
and pushes the latest changes to the public repo's 'dev' branch (public/dev).

1. If the `regress` workflow passes on 'private/dev', the `version`
workflow gets triggered. It creates a new version commit and tag, and pushes to
'private/dev', 'public/dev', and 'public/stable'.

1. When a push with a new version lands on the 'public/stable' branch, the `deploy`
workflow runs. It deploys the PyPI package of OpenRAM and creates a new GitHub
release on that repo.



## Important Notes

1. Workflows recognize the latest commit as versioned by the following
commit message syntax:

```
Bump version: <any message>
```

Automatically generated version commits have the following syntax:

```
Bump version: a.b.c -> a.b.d
```

1. The `version` workflow only increments the right-most version digit. Other digits
in the version number must be updated manually, following the syntax above. Following
this syntax is enough for the workflows to create a new version automatically;
you don't have to tag that commit manually.

1. The `regress` workflow doesn't run if the push has a new version. We assume that
such a commit was either generated automatically after a previous commit passed the
`regress` workflow, or was generated manually with caution.

1. The `regress` workflow doesn't run on the public repo.

1. The `deploy` workflow only runs on branches named 'stable'.

1. The `version` workflow is only triggered from branches named 'dev', and only if
they pass the `regress` workflow.

1. The `sync` workflow only runs on the private repo.

1. The `sync_tag` workflow only runs on the private repo.

1. Merging pull requests on the private repo should be safe in any case. They
are treated the same as commit pushes.

> **Warning**: The `regress` workflow is currently disabled on the public repo
> manually. This was done because of a security risk on our private server.
> Enabling it on GitHub will run the `regress` workflow on the public repo.
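Both version conventions above can be checked from a plain shell. The sketch below (the function names are my own, not taken from the workflow files) mirrors the message check and the right-most-digit bump that the workflows perform:

```shell
# Sketch of the two version conventions above (function names are
# assumptions, not part of the workflow files).

# True (exit 0) when a commit message marks a versioned commit.
is_versioned() {
  printf '%s' "$1" | grep -q '^Bump version:'
}

# Increment only the right-most digit of a dotted version string.
bump_patch() {
  echo "$1" | awk -F. -v OFS=. '{$NF += 1; print}'
}
```

For example, `bump_patch 1.0.9` prints `1.0.10`, matching how the `version` workflow computes the next version.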


## Flowchart
```mermaid
flowchart TD

start((Start));
privatedev[(PrivateRAM/dev)];
publicdev[(OpenRAM/dev)];
publicstable[(OpenRAM/stable)];
regressprivate{{regress}};
regresspublic{{regress}};
syncnover{{sync}};
synctag{{sync_tag}};
deploy{{deploy}};
versionprivate{{version}};
versionpublic{{version}};

privateif1(Is versioned?);
privateif2(Has version tag?);
privateif3(Did tests pass?);

publicif1(Is versioned?);
publicif2(Is versioned?);
publicif3(Did tests pass?);

start-- Push commit -->privatedev
privatedev-->privateif1
privateif1-- Yes -->privateif2
privateif2-- No -->synctag
privateif1-- No -->regressprivate
regressprivate-->privateif3
privateif3-- Yes -->versionprivate
privateif3-- No -->syncnover

start-- Push commit / Merge PR -->publicdev
publicdev-->publicif1
publicif1-- No -->regresspublic
regresspublic-->publicif3
publicif3-- Yes -->versionpublic

start-- "Push commit (from workflows)" -->publicstable
publicstable-->publicif2
publicif2-- Yes -->deploy
```

@ -1,38 +0,0 @@
name: ci
on: [push]
jobs:
  regress:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v1
      - name: Docker build
        run: |
          cd ${{ github.workspace }}/docker
          make build
      - name: PDK Install
        run: |
          export OPENRAM_HOME="${{ github.workspace }}/compiler"
          export OPENRAM_TECH="${{ github.workspace }}/technology"
          #cd $OPENRAM_HOME/tests
          #export PDK_ROOT="${{ github.workspace }}/pdk"
          #make pdk
          #make install
      - name: Regress
        run: |
          export OPENRAM_HOME="${{ github.workspace }}/compiler"
          export OPENRAM_TECH="${{ github.workspace }}/technology"
          export FREEPDK45="~/FreePDK45"
          #cd $OPENRAM_HOME/.. && make pdk && make install
          #export OPENRAM_TMP="${{ github.workspace }}/scn4me_subm_temp"
          #python3-coverage run -p $OPENRAM_HOME/tests/regress.py -j 12 -t scn4m_subm
          #$OPENRAM_HOME/tests/regress.py -j 24 -t scn4m_subm
          cd $OPENRAM_HOME/tests
          make clean
          make -k -j 48
      - name: Archive
        if: ${{ failure() }}
        uses: actions/upload-artifact@v2
        with:
          name: Regress Archives
          path: ${{ github.workspace }}/compiler/tests/results/*
@ -0,0 +1,54 @@
name: deploy
on:
  push:
    branches:
      - stable
jobs:
  # This job uploads the Python library to PyPI
  deploy_pip:
    if: ${{ startsWith(github.event.head_commit.message, 'Bump version:') }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v1
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python3 -m pip install virtualenv
      - name: Build Python package
        run: |
          make build_library
      - name: Upload package to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
  # This job creates a new GitHub release
  github_release:
    if: ${{ startsWith(github.event.head_commit.message, 'Bump version:') }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
          token: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
      - name: Create a release
        run: |
          # Find the last two tags
          export LATEST_TAG="$(git describe --tags --abbrev=0)"
          export PREVIOUS_TAG="$(git describe --tags --abbrev=0 $(git rev-list --tags --skip=1 --max-count=1))"
          # Write release notes to a file
          touch release_notes.txt
          printf "## Install\nPython package is available at [PyPI](https://pypi.org/project/openram/).\n" >> release_notes.txt
          printf "## Documentation\nDocumentation is available [here](https://github.com/VLSIDA/OpenRAM/blob/stable/docs/source/index.md).\n" >> release_notes.txt
          printf "## Changes\n" >> release_notes.txt
          printf "Full changelog: https://github.com/VLSIDA/OpenRAM/compare/${PREVIOUS_TAG}...${LATEST_TAG}\n" >> release_notes.txt
          printf "## Contributors\n" >> release_notes.txt
          printf "$(git log --pretty='format:+ %an' ${LATEST_TAG}...${PREVIOUS_TAG} | sort -u)\n" >> release_notes.txt
          # Create the release via GitHub CLI
          gh release create ${LATEST_TAG} --verify-tag -F release_notes.txt
        env:
          GITHUB_TOKEN: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
@ -0,0 +1,63 @@
name: regress
on:
  push:
    branches-ignore:
      - stable
jobs:
  # All tests should be run from this job
  regression_test:
    # This job runs on pull requests or any push that doesn't have a new version
    if: ${{ startsWith(github.event.head_commit.message, 'Bump version:') == false }}
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v1
      - name: Library build
        run: |
          rm -rf ~/.local/lib/python3.8/site-packages/openram*
          make library
      - name: Build conda
        run: |
          ./install_conda.sh
      - name: PDK Install
        run: |
          export OPENRAM_HOME="${{ github.workspace }}/compiler"
          export OPENRAM_TECH="${{ github.workspace }}/technology"
          export PDK_ROOT="${{ github.workspace }}/pdk"
          make pdk
          make install
      - name: Regress
        run: |
          export OPENRAM_HOME="${{ github.workspace }}/compiler"
          export OPENRAM_TECH="${{ github.workspace }}/technology"
          export PDK_ROOT="${{ github.workspace }}/pdk"
          export FREEPDK45="~/FreePDK45"
          # KLAYOUT_PATH breaks the klayout installation. Unset it for now...
          unset KLAYOUT_PATH
          #cd $OPENRAM_HOME/.. && make pdk && make install
          #export OPENRAM_TMP="${{ github.workspace }}/scn4me_subm_temp"
          #python3-coverage run -p $OPENRAM_HOME/tests/regress.py -j 12 -t scn4m_subm
          #$OPENRAM_HOME/tests/regress.py -j 24 -t scn4m_subm
          cd $OPENRAM_HOME/tests
          make clean
          make -k -j 48
      - name: Archive
        if: ${{ failure() }}
        uses: actions/upload-artifact@v2
        with:
          name: Regress Archives
          path: ${{ github.workspace }}/compiler/tests/results/*
  # This job triggers the sync.yml workflow
  sync_trigger:
    if: ${{ always() && github.ref_name == 'dev' && github.repository == 'VLSIDA/PrivateRAM' && needs.regression_test.result == 'failure' }}
    needs: regression_test
    uses: ./.github/workflows/sync.yml
    secrets:
      WORKFLOW_ACCESS_TOKEN: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
  # This job triggers the version.yml workflow
  version_trigger:
    if: ${{ github.ref_name == 'dev' }}
    needs: regression_test
    uses: ./.github/workflows/version.yml
    secrets:
      WORKFLOW_ACCESS_TOKEN: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
@ -0,0 +1,27 @@
name: sync
on:
  workflow_call:
    secrets:
      WORKFLOW_ACCESS_TOKEN:
        required: true
jobs:
  # This job synchronizes the 'dev' branch of the OpenRAM repo with the current branch
  sync_dev_no_version:
    if: ${{ github.repository == 'VLSIDA/PrivateRAM' }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
          token: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
      - name: Synchronize OpenRAM repo
        run: |
          # Configure pusher account
          git config --global user.name "vlsida-bot"
          git config --global user.email "mrg+vlsidabot@ucsc.edu"
          # Add remote repo
          git remote add public-repo https://${{ secrets.WORKFLOW_ACCESS_TOKEN }}@github.com/VLSIDA/OpenRAM.git
          git pull public-repo dev
          # Push the latest changes
          git push -u public-repo HEAD:dev
@ -0,0 +1,42 @@
name: sync_tag
on:
  push:
    branches:
      - dev
jobs:
  # This job makes sure that a version commit has a tag (manually bumped versions might have missing tags)
  sync_dev_tag_check:
    if: ${{ github.repository == 'VLSIDA/PrivateRAM' && startsWith(github.event.head_commit.message, 'Bump version:') }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
          token: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
      - name: Compare version and tag
        run: |
          # Configure pusher account
          git config --global user.name "vlsida-bot"
          git config --global user.email "mrg+vlsidabot@ucsc.edu"
          # Add both repos
          git remote add private-repo https://${{ secrets.WORKFLOW_ACCESS_TOKEN }}@github.com/VLSIDA/PrivateRAM.git
          git remote add public-repo https://${{ secrets.WORKFLOW_ACCESS_TOKEN }}@github.com/VLSIDA/OpenRAM.git
          # Read the version file
          echo "LATEST_VERSION=v$(cat VERSION)" >> $GITHUB_ENV
          # Read the tag name of the last commit
          echo "HEAD_TAG=$(git describe --tags HEAD)" >> $GITHUB_ENV
      - name: Make a new tag and push
        if: ${{ env.LATEST_VERSION != env.HEAD_TAG }}
        run: |
          # Tag the commit
          git tag ${{ env.LATEST_VERSION }} HEAD
          # Push to private/dev
          git pull private-repo dev
          git push private-repo HEAD:dev ${{ env.LATEST_VERSION }}
          # Push to public/dev
          git pull public-repo dev
          git push public-repo HEAD:dev ${{ env.LATEST_VERSION }}
          # Push to public/stable
          git pull public-repo stable
          git push public-repo HEAD:stable ${{ env.LATEST_VERSION }}
@ -0,0 +1,46 @@
name: version
on:
  workflow_call:
    secrets:
      WORKFLOW_ACCESS_TOKEN:
        required: true
jobs:
  make_version:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
          token: ${{ secrets.WORKFLOW_ACCESS_TOKEN }}
      - name: Configure git
        run: |
          # Configure the committer
          git config --global user.name "vlsida-bot"
          git config --global user.email "mrg+vlsidabot@ucsc.edu"
          # Set remote repos
          git remote add private-repo https://${{ secrets.WORKFLOW_ACCESS_TOKEN }}@github.com/VLSIDA/PrivateRAM.git
          git remote add public-repo https://${{ secrets.WORKFLOW_ACCESS_TOKEN }}@github.com/VLSIDA/OpenRAM.git
      - name: Make new version number
        run: |
          # Read the current version number
          export CURRENT_VERSION="$(cat VERSION)"
          # Increment the version number
          export NEXT_VERSION="$(echo ${CURRENT_VERSION} | awk -F. -v OFS=. '{$NF += 1 ; print}')"
          echo "${NEXT_VERSION}" > VERSION
          # Commit the change and tag the commit
          git commit -a -m "Bump version: ${CURRENT_VERSION} -> ${NEXT_VERSION}"
          git tag "v${NEXT_VERSION}" HEAD
      - name: Push changes
        run: |
          # Read the next tag
          export NEXT_TAG="v$(cat VERSION)"
          # Push to private/dev
          git pull private-repo dev
          git push private-repo HEAD:dev ${NEXT_TAG}
          # Push to public/dev
          git pull public-repo dev
          git push public-repo HEAD:dev ${NEXT_TAG}
          # Push to public/stable
          git pull public-repo stable
          git push public-repo HEAD:stable ${NEXT_TAG}
@ -1,4 +1,5 @@
.DS_Store
.coverage*
*~
*.orig
*.rej
@ -13,3 +14,12 @@ technology/freepdk45/ncsu_basekit
technology/sky130/*_lib
technology/sky130/tech/.magicrc
.idea
compiler/tests/results/
open_pdks/
dist/
openram.egg-info/
miniconda/
sky130A/
sky130B/
skywater-pdk/
sky130_fd_bd_sram/
HINTS.md
@ -1,117 +0,0 @@
# Debugging

When OpenRAM runs, it puts files in a temporary directory that is
shown in the banner at the top, like:
```
/tmp/openram_mrg_18128_temp/
```
This is where simulations and DRC/LVS get run so there is no network
traffic. The directory name is unique for each person and run of
OpenRAM so that no files are clobbered and simultaneous runs are possible.
If the run passes, the files are deleted. If it fails, you will see these files:
+ temp.gds is the layout (.mag files too if using SCMOS)
+ temp.sp is the netlist
+ test1.drc.err is the std err output of the DRC command
+ test1.drc.out is the standard output of the DRC command
+ test1.drc.results is the DRC results file
+ test1.lvs.err is the std err output of the LVS command
+ test1.lvs.out is the standard output of the LVS command
+ test1.lvs.results is the LVS results file

Depending on your DRC/LVS tools, there will also be:
+ \_calibreDRC.rul\_ is the DRC rule file (Calibre)
+ dc_runset is the command file (Calibre)
+ extracted.sp (Calibre)
+ run_lvs.sh is a Netgen script for LVS (Netgen)
+ run_drc.sh is a Magic script for DRC (Magic)
+ <topcell>.spice (Magic)
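Since each run gets its own directory, a tiny helper like the following can locate the most recent one after a failure. This is only a convenience sketch; the helper name is mine, and the /tmp naming pattern is assumed from the banner example above:

```shell
# Print the most recently modified OpenRAM temp directory under a base
# path (defaults to /tmp, matching the banner example above).
latest_openram_tmp() {
  base="${1:-/tmp}"
  ls -dt "$base"/openram_*_temp 2>/dev/null | head -n 1
}
```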

If DRC/LVS fails, the first thing to check is whether it ran at all, using
the .out and .err files. These show the standard output and error output
from running DRC/LVS; any setup problem will be shown here.

If DRC/LVS runs but doesn't pass, you should then look at the .results
file. If DRC fails, it will typically show you the command that was used
to run Calibre or Magic+Netgen.

To debug, you will need a layout viewer. I prefer to use Glade
on my Mac, but you can also use Calibre, Magic, etc.

1. Klayout

You can view the designs in [Klayout](https://www.klayout.de/) with the configuration
file provided in the tech directories. For example,
```
klayout temp.gds -l /home/vagrant/openram/technology/freepdk45/tf/FreePDK45.lyp
```

2. Calibre

Start the Calibre DESIGNrev viewer in the temp directory and load your GDS file:
```
calibredrv temp.gds
```
Select Verification->Start RVE and select the results database file in
the new form (e.g., test1.drc.db). This will start the RVE (results
viewer). Scroll through the check pane and find a DRC check with an
error. Select it and it will show some numbers to the right. Double
click on any of the errors in the result browser; they are labelled
with numbers, so "1 2 3 4", for example, means 4 DRC errors.

In the viewer, ">" opens the layout down a level.

3. Glade

You can view errors in [Glade](http://www.peardrop.co.uk/glade/) as well.

To remote display over X windows, you need to disable OpenGL acceleration or use VNC
or something similar. You can disable it by adding this to your .bashrc in bash:
```
export GLADE_USE_OPENGL=no
```
or in .cshrc/.tcshrc in csh/tcsh:
```
setenv GLADE_USE_OPENGL no
```
To use this with the FreePDK45 or SCMOS layer views, you should use the
tech files. Then create a .glade.py file in your user directory with
these commands to load the technology layers:
```
ui().importCds("default",
               "/Users/mrg/techfiles/freepdk45/display.drf",
               "/Users/mrg/techfiles/freepdk45/FreePDK45.tf", 1000, 1,
               "/Users/mrg/techfiles/freepdk45/layers.map")
```
Obviously, edit the paths to point to your directory. To switch
between processes, you have to change the importCds command (or you
can manually run the command each time you start Glade).

To load the errors, simply do Verify->Import Calibre Errors and select
the .results file from Calibre.

4. Magic

Magic is only supported in SCMOS. You will need to install the MOSIS SCMOS rules
and [Magic](http://opencircuitdesign.com/).

When running DRC or extraction, OpenRAM will load the GDS file, save
the .ext/.mag files, and export an extracted netlist (.spice).

5. It is possible to use other viewers as well, such as:
* [LayoutEditor](http://www.layouteditor.net/)


# Example to output/input .gds layout files from/to Cadence

1. To create your component layouts, you should stream them to
individual gds files using our provided layermap and flatten
cells. For example,
```
strmout -layerMap layers.map -library sram -topCell $i -view layout -flattenVias -flattenPcells -strmFile ../gds_lib/$i.gds
```
2. To stream a layout back into Cadence, do this:
```
strmin -layerMap layers.map -attachTechFileOfLib NCSU\_TechLib\_FreePDK45 -library sram_4_32 -strmFile sram_4_32.gds
```
When you import a gds file, make sure to attach the correct tech lib
or you will get incorrect layers in the resulting library.

@ -0,0 +1,26 @@
include Makefile
include openram.mk
include setpaths.sh
include requirements.txt
include install_conda.sh
include docker/*
recursive-include compiler *
recursive-include technology *
include VERSION
exclude .DS_Store
exclude .idea
exclude **/model_data
exclude technology/sky130/*_lib
exclude technology/sky130/tech/.magicrc
exclude compiler/gen_stimulus.py
exclude compiler/model_data_util.py
exclude compiler/printGDS.py
exclude compiler/processGDS.py
exclude compiler/uniquifyGDS.py
exclude compiler/view_profile.py
exclude compiler/run_profile.sh
recursive-exclude open_pdks *
recursive-exclude compiler/tests/results *
recursive-exclude technology/freepdk45/ncsu_basekit *
recursive-exclude outputs *
global-exclude *.pyc *~ *.orig *.rej *.aux *.out *.toc *.synctex.gz
Makefile
@ -49,6 +49,9 @@ INSTALL_BASE_DIRS := gds_lib mag_lib sp_lib lvs_lib calibre_lvs_lib klayout_lvs_
INSTALL_BASE := $(OPENRAM_HOME)/../technology/sky130
INSTALL_DIRS := $(addprefix $(INSTALL_BASE)/,$(INSTALL_BASE_DIRS))

# If conda is installed, we will use Magic from there
CONDA_DIR := $(wildcard $(TOP_DIR)/miniconda)

check-pdk-root:
ifndef PDK_ROOT
	$(error PDK_ROOT is undefined, please export it before running make)
@ -58,33 +61,42 @@ $(SKY130_PDKS_DIR): check-pdk-root
	@echo "Cloning skywater PDK..."
	@[ -d $(PDK_ROOT)/skywater-pdk ] || \
		git clone https://github.com/google/skywater-pdk.git $(PDK_ROOT)/skywater-pdk
	@cd $(SKY130_PDKS_DIR) && \
		git checkout main && git pull && \
		git checkout -qf $(SKY130_PDKS_GIT_COMMIT) && \
		git submodule update --init libraries/sky130_fd_pr/latest libraries/sky130_fd_sc_hd/latest
	@git -C $(SKY130_PDKS_DIR) checkout $(SKY130_PDKS_GIT_COMMIT) && \
		git -C $(SKY130_PDKS_DIR) submodule update --init libraries/sky130_fd_pr/latest libraries/sky130_fd_sc_hd/latest

$(OPEN_PDKS_DIR): $(SKY130_PDKS_DIR)
	@echo "Cloning open_pdks..."
	@[ -d $(OPEN_PDKS_DIR) ] || \
		git clone $(OPEN_PDKS_GIT_REPO) $(OPEN_PDKS_DIR)
	@cd $(OPEN_PDKS_DIR) && git pull && git checkout $(OPEN_PDKS_GIT_COMMIT)
	@git -C $(OPEN_PDKS_DIR) checkout $(OPEN_PDKS_GIT_COMMIT)

$(SKY130_PDK): $(OPEN_PDKS_DIR) $(SKY130_PDKS_DIR)
	@echo "Installing open_pdks..."
	$(DOCKER_CMD) sh -c ". /home/cad-user/.bashrc && cd /pdk/open_pdks && \
		./configure --enable-sky130-pdk=/pdk/skywater-pdk/libraries --with-sky130-local-path=/pdk && \
		cd sky130 && \
		make veryclean && \
		make && \
		make SHARED_PDKS_PATH=/pdk install"
ifeq ($(CONDA_DIR),"")
	@cd $(PDK_ROOT)/open_pdks && \
		./configure --enable-sky130-pdk=$(PDK_ROOT)/skywater-pdk/libraries --with-sky130-local-path=$(PDK_ROOT) && \
		cd sky130 && \
		make veryclean && \
		make && \
		make SHARED_PDKS_PATH=$(PDK_ROOT) install
else
	@source $(TOP_DIR)/miniconda/bin/activate && \
		cd $(PDK_ROOT)/open_pdks && \
		./configure --enable-sky130-pdk=$(PDK_ROOT)/skywater-pdk/libraries --with-sky130-local-path=$(PDK_ROOT) && \
		cd sky130 && \
		make veryclean && \
		make && \
		make SHARED_PDKS_PATH=$(PDK_ROOT) install && \
		conda deactivate
endif

$(SRAM_LIB_DIR): check-pdk-root
	@echo "Cloning SRAM library..."
	@[ -d $(SRAM_LIB_DIR) ] || (\
		git clone $(SRAM_LIB_GIT_REPO) $(SRAM_LIB_DIR) && \
		cd $(SRAM_LIB_DIR) && git pull && git checkout $(SRAM_LIB_GIT_COMMIT))
	@[ -d $(SRAM_LIB_DIR) ] || \
		git clone $(SRAM_LIB_GIT_REPO) $(SRAM_LIB_DIR)
	@git -C $(SRAM_LIB_DIR) checkout $(SRAM_LIB_GIT_COMMIT)

install: $(SRAM_LIB_DIR) pdk
install: $(SRAM_LIB_DIR)
	@[ -d $(PDK_ROOT)/sky130A ] || \
		(echo "Warning: $(PDK_ROOT)/sky130A not found!! Run make pdk first." && false)
	@[ -d $(PDK_ROOT)/skywater-pdk ] || \
@ -215,3 +227,16 @@ wipe: uninstall
	@rm -rf $(OPEN_PDKS_DIR)
	@rm -rf $(SKY130_PDKS_DIR)
.PHONY: wipe

# Build the openram library
build_library:
	@rm -rf dist
	@rm -rf openram.egg-info
	@python3 -m pip install --upgrade build
	@python3 -m build
.PHONY: build_library

# Build and install the openram library
library: build_library
	@python3 -m pip install --force dist/openram*.whl
.PHONY: library
PORTING.md
@ -6,19 +6,24 @@ If you want to support a new technology, you will need to create:

We provide two technology examples for [SCMOS] and [FreePDK45]. Each
specific technology (e.g., [FreePDK45]) should be a subdirectory
(e.g., $OPENRAM_TECH/freepdk45) and include certain folders and files:
* gds_lib folder with all the .gds (premade) library cells:
  * dff.gds
  * sense_amp.gds
  * write_driver.gds
  * cell_1rw.gds
  * replica\_cell\_1rw.gds
  * dummy\_cell\_1rw.gds
* sp_lib folder with all the .sp (premade) library netlists for the above cells.
* layers.map
* A valid tech Python module (tech directory with \_\_init\_\_.py and tech.py) with:
(e.g., `$OPENRAM_TECH/freepdk45`) and include certain folders and files:
* `gds_lib` folder with all the `.gds` (premade) library cells:
  * `dff.gds`
  * `sense_amp.gds`
  * `write_driver.gds`
  * `cell_1rw.gds`
  * `replica_cell_1rw.gds`
  * `dummy_cell_1rw.gds`
* `sp_lib` folder with all the `.sp` (premade) library netlists for the above cells.
* `layers.map`
* A valid tech Python module (tech directory with `__init__.py` and `tech.py`) with:
  * References in tech.py to spice models
  * DRC/LVS rules needed for dynamic cells and routing
  * Layer information
  * Spice and supply information
  * etc.


[FreePDK45]: https://www.eda.ncsu.edu/wiki/FreePDK45:Contents
[SCMOS]: https://www.mosis.com/files/scmos/scmos.pdf
README.md
@ -1,15 +1,17 @@


# OpenRAM

[](https://www.python.org/)
[](./LICENSE)
[](https://github.com/VLSIDA/OpenRAM/archive/stable.zip)
[](https://github.com/VLSIDA/OpenRAM/archive/dev.zip)
[](./LICENSE)
[](https://pypi.org/project/openram/)
[](https://githubtocolab.com/sfmth/openram-playground/blob/main/OpenRAM.ipynb)

An open-source static random access memory (SRAM) compiler.



# What is OpenRAM?
<img align="right" width="25%" src="images/SCMOS_16kb_sram.jpg">
<img align="right" width="25%" src="https://raw.githubusercontent.com/VLSIDA/OpenRAM/stable/images/SCMOS_16kb_sram.jpg">

OpenRAM is an award-winning open-source Python framework to create the layout,
netlists, timing and power models, placement and routing models, and
@ -17,186 +19,54 @@ other views necessary to use SRAMs in ASIC design. OpenRAM supports
integration in both commercial and open-source flows with both
predictive and fabricable technologies.



# Documentation

We have created a detailed presentation that serves as our [documentation][documentation].
It is the most up-to-date information, so please let us know if you see
things that need to be fixed.

# Basic Setup

## Dependencies

Please see the Dockerfile for the required versions of tools.

In general, the OpenRAM compiler has very few dependencies:
+ Docker
+ Make
+ Python 3.6 or higher
+ Various Python packages (pip install -r requirements.txt)
+ [Git]

## Docker

We have a [docker setup](./docker) to run OpenRAM. To use it, you should run:
```
cd openram/docker
make build
```
This must be run once and will take a while to build all the tools.
Please see our [documentation][documentation] and let us know if anything needs
updating.


## Environment

You must set two environment variables:
+ OPENRAM\_HOME should point to the compiler source directory.
+ OPENRAM\_TECH should point to one or more root technology directories (colon separated).

You should also add OPENRAM\_HOME to your PYTHONPATH.

For example, add this to your .bashrc:

```
export OPENRAM_HOME="$HOME/openram/compiler"
export OPENRAM_TECH="$HOME/openram/technology"
export PYTHONPATH=$OPENRAM_HOME
```

Note that if you want symbols to resolve in your editor, you may also want to add the specific technology
directory that you use and any custom technology modules as well. For example:
```
export PYTHONPATH="$OPENRAM_HOME:$OPENRAM_TECH/sky130:$OPENRAM_TECH/sky130/custom"
```
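Before a long run, it can be worth confirming that both variables are actually set. This is just a convenience sketch, not something OpenRAM ships:

```shell
# Report any required OpenRAM environment variable that is unset.
check_openram_env() {
  missing=""
  for v in OPENRAM_HOME OPENRAM_TECH; do
    if [ -z "$(printenv "$v")" ]; then
      missing="$missing $v"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
  else
    echo "ok"
  fi
}
```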
|
||||
|
||||
We include the tech files necessary for [SCMOS] SCN4M_SUBM,
|
||||
[FreePDK45]. The [SCMOS] spice models, however, are
|
||||
generic and should be replaced with foundry models. You may get the
|
||||
entire [FreePDK45 PDK here][FreePDK45].
|
||||
|
||||
|
||||
### Sky130 Setup
|
||||
|
||||
To install [Sky130], you must have the open_pdks files installed in $PDK_ROOT.
|
||||
To install this automatically, you can run:
|
||||
|
||||
cd $HOME/openram
|
||||
make pdk
|
||||
|
||||
Then you must also install the [Sky130] SRAM build space and the appropriate cell views
|
||||
by running:
|
||||
|
||||
cd $HOME/openram
|
||||
make install
|
||||
|
||||
# Basic Usage
|
||||
|
||||
Once you have defined the environment, you can run OpenRAM from the command line
|
||||
using a single configuration file written in Python.
|
||||
|
||||
For example, create a file called *myconfig.py* specifying the following
|
||||
parameters for your memory:
|
||||
```
|
||||
# Data word size
|
||||
word_size = 2
|
||||
# Number of words in the memory
|
||||
num_words = 16
|
||||
|
||||
# Technology to use in $OPENRAM_TECH
|
||||
tech_name = "scn4m_subm"
|
||||
|
||||
# You can use the technology nominal corner only
|
||||
nominal_corner_only = True
|
||||
# Or you can specify particular corners
|
||||
# Process corners to characterize
|
||||
# process_corners = ["SS", "TT", "FF"]
|
||||
# Voltage corners to characterize
|
||||
# supply_voltages = [ 3.0, 3.3, 3.5 ]
|
||||
# Temperature corners to characterize
|
||||
# temperatures = [ 0, 25, 100 ]
|
||||
|
||||
# Output directory for the results
|
||||
output_path = "temp"
|
||||
# Output file base name
|
||||
output_name = "sram_{0}_{1}_{2}".format(word_size,num_words,tech_name)
|
||||
|
||||
# Disable analytical models for full characterization (WARNING: slow!)
|
||||
# analytical_delay = False
|
||||
|
||||
```
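To see why full characterization is slow, note that the corner lists multiply: every process corner is simulated at every voltage and every temperature. A sketch using the example values above:

```python
from itertools import product

# Example corner lists from the configuration above
process_corners = ["SS", "TT", "FF"]
supply_voltages = [3.0, 3.3, 3.5]
temperatures = [0, 25, 100]

# Full characterization sweeps the PVT cross-product
corners = list(product(process_corners, supply_voltages, temperatures))
print(len(corners))  # 27 corners, each requiring its own simulations
```

Setting nominal_corner_only = True collapses this to a single corner.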
|
||||
|
||||
You can then run OpenRAM by executing:
|
||||
```
|
||||
python3 $OPENRAM_HOME/openram.py myconfig
|
||||
```
|
||||
You can see all of the options for the configuration file in
|
||||
$OPENRAM\_HOME/options.py
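Conceptually, the configuration file just overrides default option values. A rough sketch of that idea (the class and helper names here are illustrative, not OpenRAM's actual internals, which live in options.py and globals.py):

```python
class options:
    # Hypothetical defaults standing in for those in options.py
    word_size = 1
    num_words = 16
    tech_name = "scn4m_subm"
    analytical_delay = True

def apply_config(opts, config_module):
    """Copy every public attribute the config file defines onto the options."""
    for name in dir(config_module):
        if not name.startswith("_") and hasattr(opts, name):
            setattr(opts, name, getattr(config_module, name))
    return opts
```

Any option name defined in options.py can be overridden the same way word_size is in the example config.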
|
||||
|
||||
To run designs in Docker, we suggest using, for example:
|
||||
```
|
||||
cd openram/macros
|
||||
make example_config_scn4m_subm
|
||||
```
|
||||
|
||||
# Unit Tests
|
||||
|
||||
Regression testing performs a number of tests for all modules in OpenRAM.
|
||||
From the unit test directory ($OPENRAM\_HOME/tests),
|
||||
use the following command to run all regression tests:
|
||||
|
||||
```
|
||||
cd openram/compiler/tests
|
||||
make -j 3
|
||||
```
|
||||
The -j option runs the tests with 3 parallel threads. By default, this will run in all technologies.
|
||||
|
||||
To run a specific test in all technologies:
|
||||
```
|
||||
cd openram/compiler/tests
|
||||
make 05_bitcell_array_test
|
||||
```
|
||||
To run a specific technology:
|
||||
```
|
||||
cd openram/compiler/tests
|
||||
TECHS=scn4m_subm make 05_bitcell_array_test
|
||||
```
|
||||
|
||||
To increase the verbosity of the test, add one (or more) -v options and
|
||||
pass it as an argument to OpenRAM:
|
||||
```
|
||||
ARGS="-v" make 05_bitcell_array_test
|
||||
```
|
||||
|
||||
Unit test results are written to the following directory:
|
||||
```
|
||||
openram/compiler/tests/results/<technology>/<test>
|
||||
```
|
||||
If the test fails, there will be a tmp directory with intermediate results.
|
||||
If the test passes, this directory will be deleted to save space.
|
||||
In either case, you can view the .out file to see the test output.
|
||||
|
||||
# Get Involved
|
||||
|
||||
+ [Port it](./PORTING.md) to a new technology.
|
||||
+ Report bugs by submitting [Github issues].
|
||||
+ [Port it](./PORTING.md) to a new technology
|
||||
+ Report bugs by submitting [Github issues]
|
||||
+ Develop new features (see [how to contribute](./CONTRIBUTING.md))
|
||||
+ Submit code/fixes using a [Github pull request]
|
||||
+ Follow our [project][Github project].
|
||||
+ Follow our [project][Github project]
|
||||
+ Read and cite our [ICCAD paper][OpenRAMpaper]
|
||||
|
||||
|
||||
|
||||
# Further Help
|
||||
|
||||
+ [Additional hints](./HINTS.md)
|
||||
+ [Documentation][documentation]
|
||||
+ [OpenRAM Slack Workspace][Slack]
|
||||
+ [OpenRAM Users Group][user-group] ([subscribe here][user-group-subscribe])
|
||||
+ [OpenRAM Developers Group][dev-group] ([subscribe here][dev-group-subscribe])
|
||||
+ <a rel="me" href="https://fosstodon.org/@mrg">@mrg@fosstodon.org</a>
|
||||
|
||||
|
||||
|
||||
# License
|
||||
|
||||
OpenRAM is licensed under the [BSD 3-clause License](./LICENSE).
|
||||
OpenRAM is licensed under the [BSD 3-Clause License](./LICENSE).
|
||||
|
||||
|
||||
|
||||
# Publications
|
||||
|
||||
+ [M. R. Guthaus, J. E. Stine, S. Ataei, B. Chen, B. Wu, M. Sarwar, "OpenRAM: An Open-Source Memory Compiler," Proceedings of the 35th International Conference on Computer-Aided Design (ICCAD), 2016.](https://escholarship.org/content/qt8x19c778/qt8x19c778_noSplash_b2b3fbbb57f1269f86d0de77865b0691.pdf)
|
||||
+ [S. Ataei, J. Stine, M. Guthaus, “A 64 kb differential single-port 12T SRAM design with a bit-interleaving scheme for low-voltage operation in 32 nm SOI CMOS,” International Conference on Computer Design (ICCD), 2016, pp. 499-506.](https://escholarship.org/uc/item/99f6q9c9)
|
||||
+ [E. Ebrahimi, M. Guthaus, J. Renau, “Timing Speculative SRAM”, IEEE International Symposium on Circuits and Systems (ISCAS), 2017.](https://escholarship.org/content/qt7nn0j5x3/qt7nn0j5x3_noSplash_172457455e1aceba20694c3d7aa489b4.pdf)
|
||||
+ [B. Wu, J.E. Stine, M.R. Guthaus, "Fast and Area-Efficient Word-Line Optimization", IEEE International Symposium on Circuits and Systems (ISCAS), 2019.](https://escholarship.org/content/qt98s4c1hp/qt98s4c1hp_noSplash_753dcc3e218f60aafff98ef77fb56384.pdf)
|
||||
+ [B. Wu, M. Guthaus, "Bottom Up Approach for High Speed SRAM Word-line Buffer Insertion Optimization", IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), 2019.](https://ieeexplore.ieee.org/document/8920325)
|
||||
+ [H. Nichols, M. Grimes, J. Sowash, J. Cirimelli-Low, M. Guthaus, "Automated Synthesis of Multi-Port Memories and Control", IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), 2019.](https://escholarship.org/content/qt7047n3k0/qt7047n3k0.pdf?t=q4gcij)
|
||||
+ [H. Nichols, "Statistical Modeling of SRAMs", M.S. Thesis, UCSC, 2022.](https://escholarship.org/content/qt7vx9n089/qt7vx9n089_noSplash_cfc4ba479d8eb1b6ec25d7c92357bc18.pdf?t=ra9wzr)
|
||||
+ [M. Guthaus, H. Nichols, J. Cirimelli-Low, J. Kunzler, B. Wu, "Enabling Design Technology Co-Optimization of SRAMs through Open-Source Software", IEEE International Electron Devices Meeting (IEDM), 2020.](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9372047)
|
||||
|
||||
|
||||
|
||||
# Contributors & Acknowledgment
|
||||
|
||||
- [Matthew Guthaus] from [VLSIDA] created the OpenRAM project and is the lead architect.
|
||||
|
|
@ -205,7 +75,7 @@ OpenRAM is licensed under the [BSD 3-clause License](./LICENSE).
|
|||
|
||||
If I forgot to add you, please let me know!
|
||||
|
||||
* * *
|
||||
|
||||
|
||||
[Matthew Guthaus]: https://users.soe.ucsc.edu/~mrg
|
||||
[James Stine]: https://ece.okstate.edu/content/stine-james-e-jr-phd
|
||||
|
|
@ -215,9 +85,9 @@ If I forgot to add you, please let me know!
|
|||
|
||||
[Github issues]: https://github.com/VLSIDA/OpenRAM/issues
|
||||
[Github pull request]: https://github.com/VLSIDA/OpenRAM/pulls
|
||||
[Github project]: https://github.com/VLSIDA/OpenRAM
|
||||
[Github project]: https://github.com/VLSIDA/OpenRAM
|
||||
|
||||
[documentation]: https://docs.google.com/presentation/d/10InGB33N51I6oBHnqpU7_w9DXlx-qe9zdrlco2Yc5co/edit?usp=sharing
|
||||
[documentation]: docs/source/index.md
|
||||
[dev-group]: mailto:openram-dev-group@ucsc.edu
|
||||
[user-group]: mailto:openram-user-group@ucsc.edu
|
||||
[dev-group-subscribe]: mailto:openram-dev-group+subscribe@ucsc.edu
|
||||
|
|
|
|||
|
|
@ -0,0 +1,91 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import os
|
||||
|
||||
|
||||
# Attempt to add the source code to the PYTHONPATH here before running globals.init_openram()
|
||||
try:
|
||||
OPENRAM_HOME = os.path.abspath(os.environ.get("OPENRAM_HOME"))
|
||||
except:
|
||||
OPENRAM_HOME = os.path.dirname(os.path.abspath(__file__)) + "/compiler"
|
||||
if not os.path.isdir(OPENRAM_HOME):
|
||||
assert False
|
||||
# Make sure that OPENRAM_HOME is an environment variable just in case
|
||||
if "OPENRAM_HOME" not in os.environ.keys():
|
||||
os.environ["OPENRAM_HOME"] = OPENRAM_HOME
|
||||
# Prepend $OPENRAM_HOME to __path__ so that openram will use those modules
|
||||
__path__.insert(0, OPENRAM_HOME)
|
||||
|
||||
|
||||
# Find the conda installer script
|
||||
if os.path.exists(OPENRAM_HOME + "/install_conda.sh"):
|
||||
CONDA_INSTALLER = OPENRAM_HOME + "/install_conda.sh"
|
||||
CONDA_HOME = OPENRAM_HOME + "/miniconda"
|
||||
elif os.path.exists(OPENRAM_HOME + "/../install_conda.sh"):
|
||||
CONDA_INSTALLER = OPENRAM_HOME + "/../install_conda.sh"
|
||||
CONDA_HOME = os.path.abspath(OPENRAM_HOME + "/../miniconda")
|
||||
# Override CONDA_HOME if it's set as an environment variable
|
||||
if "CONDA_HOME" in os.environ.keys():
|
||||
CONDA_HOME = os.environ["CONDA_HOME"]
|
||||
# Add CONDA_HOME to environment variables just in case
|
||||
try:
|
||||
os.environ["CONDA_HOME"] = CONDA_HOME
|
||||
except:
|
||||
from openram import debug
|
||||
debug.warning("Couldn't find conda setup directory.")
|
||||
|
||||
|
||||
# Import everything in globals.py
|
||||
from .globals import *
|
||||
# Import classes in the "openram" namespace
|
||||
from .sram_config import *
|
||||
from .sram import *
|
||||
from .rom_config import *
|
||||
from .rom import *
|
||||
|
||||
|
||||
# Add a meta path finder for custom modules
|
||||
from importlib.abc import MetaPathFinder
|
||||
class custom_module_finder(MetaPathFinder):
|
||||
"""
|
||||
This class is a 'hook' in Python's import system. If it encounters a module
|
||||
that can be customized, it checks if there is a custom module specified in
|
||||
the configuration file. If there is a custom module, it is imported instead
|
||||
of the default one.
|
||||
"""
|
||||
def find_spec(self, fullname, path, target=None):
|
||||
# Get package and module names
|
||||
package_name = fullname.split(".")[0]
|
||||
module_name = fullname.split(".")[-1]
|
||||
# Skip if the package is not openram
|
||||
if package_name != "openram":
|
||||
return None
|
||||
# Search for the module name in customizable modules
|
||||
from openram import OPTS
|
||||
for k, v in OPTS.__dict__.items():
|
||||
if module_name == v:
|
||||
break
|
||||
else:
|
||||
return None
|
||||
# Search for the custom module
|
||||
import sys
|
||||
# Try to find the module in sys.path
|
||||
for path in sys.path:
|
||||
# Skip this path if not directory
|
||||
if not os.path.isdir(path):
|
||||
continue
|
||||
for file in os.listdir(path):
|
||||
# If there is a script matching the custom module name,
|
||||
# import it with the default module name
|
||||
if file == (module_name + ".py"):
|
||||
from importlib.util import spec_from_file_location
|
||||
return spec_from_file_location(module_name, "{0}/{1}.py".format(path, module_name))
|
||||
return None
|
||||
# Python calls meta path finders and asks them to handle the module import if
|
||||
# they can
|
||||
sys.meta_path.insert(0, custom_module_finder())
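The hook above can be demonstrated in isolation. A minimal sketch (module and file names are invented) that redirects a single import to an arbitrary file on disk, the same mechanism OpenRAM uses to swap in custom technology modules:

```python
import importlib.util
from importlib.abc import MetaPathFinder

class redirect_finder(MetaPathFinder):
    """Redirect the import of one module name to a specific source file."""
    def __init__(self, module_name, file_path):
        self.module_name = module_name
        self.file_path = file_path

    def find_spec(self, fullname, path, target=None):
        if fullname != self.module_name:
            return None  # defer to the normal import machinery
        return importlib.util.spec_from_file_location(fullname, self.file_path)
```

Installing it with sys.meta_path.insert(0, redirect_finder(...)) makes a plain import statement load the custom file instead of the default module.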
|
||||
|
|
@ -0,0 +1,31 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
|
||||
# All rights reserved.
|
||||
#
|
||||
"""
|
||||
Common functions for top-level scripts
|
||||
"""
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
|
||||
def make_openram_package():
|
||||
""" Make sure that OpenRAM can be used as a Python package. """
|
||||
|
||||
import importlib.util
|
||||
|
||||
# Find the package loader from python/site-packages
|
||||
openram_loader = importlib.util.find_spec("openram")
|
||||
|
||||
# If openram library isn't found as a python package, import it from
|
||||
# the $OPENRAM_HOME path.
|
||||
if openram_loader is None:
|
||||
OPENRAM_HOME = os.getenv("OPENRAM_HOME")
|
||||
# Import using spec since the directory can be named something other
|
||||
# than "openram".
|
||||
spec = importlib.util.spec_from_file_location("openram", "{}/../__init__.py".format(OPENRAM_HOME))
|
||||
module = importlib.util.module_from_spec(spec)
|
||||
sys.modules["openram"] = module
|
||||
spec.loader.exec_module(module)
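The spec-based import used here can be exercised on its own. A hedged sketch (file and module names invented) of importing a source file under an arbitrary module name, as make_openram_package() does for a directory not named "openram":

```python
import importlib.util
import sys

def import_as(name, file_path):
    """Import a Python source file and register it under the given name."""
    spec = importlib.util.spec_from_file_location(name, file_path)
    module = importlib.util.module_from_spec(spec)
    # Register before executing so imports inside the module can resolve it
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```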
|
||||
|
|
@ -95,7 +95,7 @@ model: $(STAMPS)
|
|||
$(eval bname=$(basename $(notdir $@)))
|
||||
$(eval config_path=$(CONFIG_DIR)/$(addsuffix .py, $(notdir $(basename $@))))
|
||||
mkdir -p $(SIM_DIR)/$(bname)
|
||||
-python3 $(OPENRAM_HOME)/openram.py $(OPTS) -p $(SIM_DIR)/$(bname) -o $(bname) -t $(TECH) $(config_path) 2>&1 > /dev/null
|
||||
-python3 $(OPENRAM_HOME)/../sram_compiler.py $(OPTS) -p $(SIM_DIR)/$(bname) -o $(bname) -t $(TECH) $(config_path) 2>&1 > /dev/null
|
||||
touch $@
|
||||
|
||||
clean_model:
|
||||
|
|
|
|||
|
|
@ -1,3 +1,8 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
|
||||
# All rights reserved.
|
||||
#
|
||||
from .channel_route import *
|
||||
from .contact import *
|
||||
from .delay_data import *
|
||||
|
|
|
|||
|
|
@ -1,13 +1,13 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import collections
|
||||
import debug
|
||||
from tech import drc
|
||||
from openram import debug
|
||||
from openram.tech import drc
|
||||
from .vector import vector
|
||||
from .design import design
|
||||
|
||||
|
|
@ -405,4 +405,3 @@ class channel_route(design):
|
|||
to_layer=self.horizontal_layer,
|
||||
offset=pin_pos)
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -1,15 +1,15 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import debug
|
||||
from openram import debug
|
||||
from openram.tech import drc, layer, preferred_directions
|
||||
from openram.tech import layer as tech_layers
|
||||
from .hierarchy_design import hierarchy_design
|
||||
from .vector import vector
|
||||
from tech import drc, layer, preferred_directions
|
||||
from tech import layer as tech_layers
|
||||
|
||||
|
||||
class contact(hierarchy_design):
|
||||
|
|
|
|||
|
|
@ -1,12 +1,11 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
|
||||
|
||||
class delay_data():
|
||||
"""
|
||||
This is the delay class to represent the delay information
|
||||
|
|
@ -38,7 +37,3 @@ class delay_data():
|
|||
assert isinstance(other, delay_data)
|
||||
return delay_data(other.delay + self.delay,
|
||||
self.slew)
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -1,15 +1,15 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import debug
|
||||
from tech import GDS, layer
|
||||
from tech import preferred_directions
|
||||
from tech import cell_properties as props
|
||||
from globals import OPTS
|
||||
from openram import debug
|
||||
from openram.tech import GDS, layer
|
||||
from openram.tech import preferred_directions
|
||||
from openram.tech import cell_properties as props
|
||||
from openram import OPTS
|
||||
from . import utils
|
||||
from .hierarchy_design import hierarchy_design
|
||||
|
||||
|
|
@ -67,7 +67,7 @@ class design(hierarchy_design):
|
|||
self.setup_multiport_constants()
|
||||
|
||||
try:
|
||||
from tech import power_grid
|
||||
from openram.tech import power_grid
|
||||
self.supply_stack = power_grid
|
||||
except ImportError:
|
||||
# if no power_grid is specified by tech we use sensible defaults
|
||||
|
|
@ -78,7 +78,7 @@ class design(hierarchy_design):
|
|||
for pin_name in self.pins:
|
||||
pins = self.get_pins(pin_name)
|
||||
for pin in pins:
|
||||
print(pin_name, pin)
|
||||
debug.info(0, "{0} {1}".format(pin_name, pin))
|
||||
|
||||
def setup_multiport_constants(self):
|
||||
"""
|
||||
|
|
|
|||
|
|
@ -1,4 +1,8 @@
|
|||
|
||||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
|
||||
# All rights reserved.
|
||||
#
|
||||
|
||||
class drc_error(Exception):
|
||||
"""Exception raised for DRC errors.
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
|
|
@ -8,14 +8,14 @@
|
|||
"""
|
||||
This provides a set of useful generic types for the gdsMill interface.
|
||||
"""
|
||||
import debug
|
||||
from .vector import vector
|
||||
import tech
|
||||
import math
|
||||
import copy
|
||||
import numpy as np
|
||||
from globals import OPTS
|
||||
from openram import debug
|
||||
from openram import tech
|
||||
from openram import OPTS
|
||||
from .utils import round_to_grid
|
||||
from .vector import vector
|
||||
|
||||
|
||||
class geometry:
|
||||
|
|
@ -249,7 +249,6 @@ class instance(geometry):
|
|||
""" Return an absolute pin that is offset and transformed based on
|
||||
this instance location. Index will return one of several pins."""
|
||||
|
||||
import copy
|
||||
if index == -1:
|
||||
pin = copy.deepcopy(self.mod.get_pin(name))
|
||||
pin.transform(self.offset, self.mirror, self.rotate)
|
||||
|
|
@ -267,7 +266,6 @@ class instance(geometry):
|
|||
""" Return an absolute pin that is offset and transformed based on
|
||||
this instance location. """
|
||||
|
||||
import copy
|
||||
pin = copy.deepcopy(self.mod.get_pins(name))
|
||||
|
||||
new_pins = []
|
||||
|
|
@ -359,7 +357,7 @@ class instance(geometry):
|
|||
for offset in range(len(normalized_br_offsets)):
|
||||
for port in range(len(br_names)):
|
||||
cell_br_meta.append([br_names[offset], row, col, port])
|
||||
|
||||
|
||||
if normalized_storage_nets == []:
|
||||
debug.error("normalized storage nets should not be empty! Check if the GDS labels Q and Q_bar are correctly set on M1 of the cell",1)
|
||||
Q_x = normalized_storage_nets[0][0]
|
||||
|
|
|
|||
|
|
@ -1,15 +1,15 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import os
|
||||
from openram import debug
|
||||
from openram import OPTS
|
||||
from .hierarchy_layout import layout
|
||||
from .hierarchy_spice import spice
|
||||
import debug
|
||||
import os
|
||||
from globals import OPTS
|
||||
|
||||
|
||||
class hierarchy_design(spice, layout):
|
||||
|
|
@ -49,7 +49,7 @@ class hierarchy_design(spice, layout):
|
|||
|
||||
def DRC_LVS(self, final_verification=False, force_check=False):
|
||||
"""Checks both DRC and LVS for a module"""
|
||||
import verify
|
||||
from openram import verify
|
||||
|
||||
# No layout to check
|
||||
if OPTS.netlist_only:
|
||||
|
|
@ -82,7 +82,7 @@ class hierarchy_design(spice, layout):
|
|||
|
||||
def DRC(self, final_verification=False):
|
||||
"""Checks DRC for a module"""
|
||||
import verify
|
||||
from openram import verify
|
||||
|
||||
# Unit tests will check themselves.
|
||||
# Do not run if disabled in options.
|
||||
|
|
@ -102,7 +102,7 @@ class hierarchy_design(spice, layout):
|
|||
|
||||
def LVS(self, final_verification=False):
|
||||
"""Checks LVS for a module"""
|
||||
import verify
|
||||
from openram import verify
|
||||
|
||||
# Unit tests will check themselves.
|
||||
# Do not run if disabled in options.
|
||||
|
|
|
|||
|
|
@ -1,32 +1,32 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import os
|
||||
import sys
|
||||
import os
|
||||
import re
|
||||
from math import sqrt
|
||||
import debug
|
||||
from gdsMill import gdsMill
|
||||
import tech
|
||||
from tech import drc, GDS
|
||||
from tech import layer as tech_layer
|
||||
from tech import layer_indices as tech_layer_indices
|
||||
from tech import preferred_directions
|
||||
from tech import layer_stacks as tech_layer_stacks
|
||||
from tech import active_stack as tech_active_stack
|
||||
from sram_factory import factory
|
||||
from globals import OPTS
|
||||
from openram import debug
|
||||
from openram.gdsMill import gdsMill
|
||||
from openram import tech
|
||||
from openram.tech import drc, GDS
|
||||
from openram.tech import layer as tech_layer
|
||||
from openram.tech import layer_indices as tech_layer_indices
|
||||
from openram.tech import preferred_directions
|
||||
from openram.tech import layer_stacks as tech_layer_stacks
|
||||
from openram.tech import active_stack as tech_active_stack
|
||||
from openram.sram_factory import factory
|
||||
from openram import OPTS
|
||||
from .vector import vector
|
||||
from .pin_layout import pin_layout
|
||||
from .utils import round_to_grid
|
||||
from . import geometry
|
||||
|
||||
try:
|
||||
from tech import special_purposes
|
||||
from openram.tech import special_purposes
|
||||
except ImportError:
|
||||
special_purposes = {}
|
||||
|
||||
|
|
@ -141,30 +141,28 @@ class layout():
|
|||
layout.active_space)
|
||||
|
||||
# These are for debugging previous manual rules
|
||||
if False:
|
||||
print("poly_width", layout.poly_width)
|
||||
print("poly_space", layout.poly_space)
|
||||
print("m1_width", layout.m1_width)
|
||||
print("m1_space", layout.m1_space)
|
||||
print("m2_width", layout.m2_width)
|
||||
print("m2_space", layout.m2_space)
|
||||
print("m3_width", layout.m3_width)
|
||||
print("m3_space", layout.m3_space)
|
||||
print("m4_width", layout.m4_width)
|
||||
print("m4_space", layout.m4_space)
|
||||
print("active_width", layout.active_width)
|
||||
print("active_space", layout.active_space)
|
||||
print("contact_width", layout.contact_width)
|
||||
print("poly_to_active", layout.poly_to_active)
|
||||
print("poly_extend_active", layout.poly_extend_active)
|
||||
print("poly_to_contact", layout.poly_to_contact)
|
||||
print("active_contact_to_gate", layout.active_contact_to_gate)
|
||||
print("poly_contact_to_gate", layout.poly_contact_to_gate)
|
||||
print("well_enclose_active", layout.well_enclose_active)
|
||||
print("implant_enclose_active", layout.implant_enclose_active)
|
||||
print("implant_space", layout.implant_space)
|
||||
import sys
|
||||
sys.exit(1)
|
||||
level=99
|
||||
debug.info(level, "poly_width {}".format(layout.poly_width))
|
||||
debug.info(level, "poly_space {}".format(layout.poly_space))
|
||||
debug.info(level, "m1_width {}".format(layout.m1_width))
|
||||
debug.info(level, "m1_space {}".format(layout.m1_space))
|
||||
debug.info(level, "m2_width {}".format(layout.m2_width))
|
||||
debug.info(level, "m2_space {}".format(layout.m2_space))
|
||||
debug.info(level, "m3_width {}".format(layout.m3_width))
|
||||
debug.info(level, "m3_space {}".format(layout.m3_space))
|
||||
debug.info(level, "m4_width {}".format(layout.m4_width))
|
||||
debug.info(level, "m4_space {}".format(layout.m4_space))
|
||||
debug.info(level, "active_width {}".format(layout.active_width))
|
||||
debug.info(level, "active_space {}".format(layout.active_space))
|
||||
debug.info(level, "contact_width {}".format(layout.contact_width))
|
||||
debug.info(level, "poly_to_active {}".format(layout.poly_to_active))
|
||||
debug.info(level, "poly_extend_active {}".format(layout.poly_extend_active))
|
||||
debug.info(level, "poly_to_contact {}".format(layout.poly_to_contact))
|
||||
debug.info(level, "active_contact_to_gate {}".format(layout.active_contact_to_gate))
|
||||
debug.info(level, "poly_contact_to_gate {}".format(layout.poly_contact_to_gate))
|
||||
debug.info(level, "well_enclose_active {}".format(layout.well_enclose_active))
|
||||
debug.info(level, "implant_enclose_active {}".format(layout.implant_enclose_active))
|
||||
debug.info(level, "implant_space {}".format(layout.implant_space))
|
||||
|
||||
@classmethod
|
||||
def setup_layer_constants(layout):
|
||||
|
|
@ -173,7 +171,7 @@ class layout():
|
|||
in many places in the compiler.
|
||||
"""
|
||||
try:
|
||||
from tech import power_grid
|
||||
from openram.tech import power_grid
|
||||
layout.pwr_grid_layers = [power_grid[0], power_grid[2]]
|
||||
except ImportError:
|
||||
layout.pwr_grid_layers = ["m3", "m4"]
|
||||
|
|
@ -202,21 +200,19 @@ class layout():
|
|||
"{}_nonpref_pitch".format(layer_id),
|
||||
layout.compute_pitch(layer_id, False))
|
||||
|
||||
if False:
|
||||
for name in tech_layer_indices:
|
||||
if name == "active":
|
||||
continue
|
||||
try:
|
||||
print("{0} width {1} space {2}".format(name,
|
||||
getattr(layout, "{}_width".format(name)),
|
||||
getattr(layout, "{}_space".format(name))))
|
||||
level=99
|
||||
for name in tech_layer_indices:
|
||||
if name == "active":
|
||||
continue
|
||||
try:
|
||||
debug.info(level, "{0} width {1} space {2}".format(name,
|
||||
getattr(layout, "{}_width".format(name)),
|
||||
getattr(layout, "{}_space".format(name))))
|
||||
|
||||
print("pitch {0} nonpref {1}".format(getattr(layout, "{}_pitch".format(name)),
|
||||
getattr(layout, "{}_nonpref_pitch".format(name))))
|
||||
except AttributeError:
|
||||
pass
|
||||
import sys
|
||||
sys.exit(1)
|
||||
debug.info(level, "pitch {0} nonpref {1}".format(getattr(layout, "{}_pitch".format(name)),
|
||||
getattr(layout, "{}_nonpref_pitch".format(name))))
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
@staticmethod
|
||||
def compute_pitch(layer, preferred=True):
|
||||
|
|
@ -635,10 +631,11 @@ class layout():
|
|||
"""
|
||||
return self.pins
|
||||
|
||||
def copy_layout_pin(self, instance, pin_name, new_name=""):
|
||||
def copy_layout_pin(self, instance, pin_name, new_name="", relative_offset=vector(0, 0)):
|
||||
"""
|
||||
Create a copied version of the layout pin at the current level.
|
||||
You can optionally rename the pin to a new name.
|
||||
You can optionally add an offset vector by which to move the pin.
|
||||
"""
|
||||
pins = instance.get_pins(pin_name)
|
||||
|
||||
|
|
@ -650,7 +647,7 @@ class layout():
|
|||
new_name = pin_name
|
||||
self.add_layout_pin(new_name,
|
||||
pin.layer,
|
||||
pin.ll(),
|
||||
pin.ll() + relative_offset,
|
||||
pin.width(),
|
||||
pin.height())
|
||||
|
||||
|
|
@ -699,13 +696,15 @@ class layout():
|
|||
start=left_pos,
|
||||
end=right_pos)
|
||||
|
||||
def connect_row_pins(self, layer, pins, name=None, full=False):
|
||||
def connect_row_pins(self, layer, pins, name=None, full=False, round=False):
|
||||
"""
|
||||
Connects left/right rows that are aligned.
|
||||
"""
|
||||
bins = {}
|
||||
for pin in pins:
|
||||
y = pin.cy()
|
||||
if round:
|
||||
y = round_to_grid(y)
|
||||
try:
|
||||
bins[y].append(pin)
|
||||
except KeyError:
|
||||
|
|
@ -788,13 +787,15 @@ class layout():
|
|||
end=bot_pos)
|
||||
|
||||
|
||||
def connect_col_pins(self, layer, pins, name=None, full=False):
|
||||
def connect_col_pins(self, layer, pins, name=None, full=False, round=False, directions="pref"):
|
||||
"""
|
||||
Connects top/bot columns that are aligned.
|
||||
"""
|
||||
bins = {}
|
||||
for pin in pins:
|
||||
x = pin.cx()
|
||||
if round:
|
||||
x = round_to_grid(x)
|
||||
try:
|
||||
bins[x].append(pin)
|
||||
except KeyError:
|
||||
|
|
@ -820,7 +821,8 @@ class layout():
|
|||
self.add_via_stack_center(from_layer=pin.layer,
|
||||
to_layer=layer,
|
||||
offset=pin.center(),
|
||||
min_area=True)
|
||||
min_area=True,
|
||||
directions=directions)
|
||||
|
||||
if name:
|
||||
self.add_layout_pin_segment_center(text=name,
|
||||
|
|
@ -1257,7 +1259,6 @@ class layout():
|
|||
|
||||
def add_via(self, layers, offset, size=[1, 1], directions=None, implant_type=None, well_type=None):
|
||||
""" Add a three layer via structure. """
|
||||
from sram_factory import factory
|
||||
via = factory.create(module_type="contact",
|
||||
layer_stack=layers,
|
||||
dimensions=size,
|
||||
|
|
@ -1276,7 +1277,6 @@ class layout():
|
|||
Add a three layer via structure by the center coordinate
|
||||
accounting for mirroring and rotation.
|
||||
"""
|
||||
from sram_factory import factory
|
||||
via = factory.create(module_type="contact",
|
||||
layer_stack=layers,
|
||||
dimensions=size,
|
||||
|
|
@ -1317,7 +1317,7 @@ class layout():
|
|||
return None
|
||||
|
||||
intermediate_layers = self.get_metal_layers(from_layer, to_layer)
|
||||
|
||||
|
||||
via = None
|
||||
cur_layer = from_layer
|
||||
while cur_layer != to_layer:
|
||||
|
|
@ -1383,10 +1383,10 @@ class layout():
|
|||
|
||||
def add_ptx(self, offset, mirror="R0", rotate=0, width=1, mults=1, tx_type="nmos"):
|
||||
"""Adds a ptx module to the design."""
|
||||
import ptx
|
||||
mos = ptx.ptx(width=width,
|
||||
mults=mults,
|
||||
tx_type=tx_type)
|
||||
from openram.modules import ptx
|
||||
mos = ptx(width=width,
|
||||
mults=mults,
|
||||
tx_type=tx_type)
|
||||
inst = self.add_inst(name=mos.name,
|
||||
mod=mos,
|
||||
offset=offset,
|
||||
|
|
@ -1897,7 +1897,7 @@ class layout():
|
|||
elif add_vias:
|
||||
self.copy_power_pin(pin, new_name=new_name)
|
||||
|
||||
def add_io_pin(self, instance, pin_name, new_name, start_layer=None):
|
||||
def add_io_pin(self, instance, pin_name, new_name, start_layer=None, directions=None):
|
||||
"""
|
||||
Add a single input or output pin up to metal 3.
|
||||
"""
|
||||
|
|
@ -1907,7 +1907,7 @@ class layout():
|
|||
start_layer = pin.layer
|
||||
|
||||
# Just use the power pin function for now to save code
|
||||
self.add_power_pin(new_name, pin.center(), start_layer=start_layer)
|
||||
self.add_power_pin(new_name, pin.center(), start_layer=start_layer, directions=directions)
|
||||
|
||||
def add_power_pin(self, name, loc, directions=None, start_layer="m1"):
|
||||
# Hack for min area
|
||||
|
|
@ -2180,7 +2180,6 @@ class layout():
|
|||
|
||||
# Find the number of vias for this pitch
|
||||
supply_vias = 1
|
||||
from sram_factory import factory
|
||||
while True:
|
||||
c = factory.create(module_type="contact",
|
||||
layer_stack=self.m1_stack,
|
||||
|
|
@ -2293,7 +2292,6 @@ class layout():
|
|||
|
||||
# Find the number of vias for this pitch
|
||||
self.supply_vias = 1
|
||||
from sram_factory import factory
|
||||
while True:
|
||||
c = factory.create(module_type="contact",
|
||||
layer_stack=self.m1_stack,
|
||||
|
|
|
|||
|
|
@@ -1,17 +1,18 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-import re
 import os
+import re
 import math
-import tech
-from globals import OPTS
+import textwrap as tr
 from pprint import pformat
+from openram import debug
+from openram import tech
+from openram import OPTS
 from .delay_data import delay_data
 from .wire_spice_model import wire_spice_model
 from .power_data import power_data

@@ -37,7 +38,7 @@ class spice():
         # If we have a separate lvs directory, then all the lvs files
         # should be in there (all or nothing!)
         try:
-            from tech import lvs_name
+            from openram.tech import lvs_name
             lvs_dir = OPTS.openram_tech + lvs_name + "_lvs_lib/"
         except ImportError:
             lvs_dir = OPTS.openram_tech + "lvs_lib/"

@@ -338,19 +339,21 @@ class spice():
             return

         # write out the first spice line (the subcircuit)
-        sp.write("\n.SUBCKT {0} {1}\n".format(self.cell_name,
-                                              " ".join(self.pins)))
+        wrapped_pins = "\n+ ".join(tr.wrap(" ".join(self.pins)))
+        sp.write("\n.SUBCKT {0}\n+ {1}\n".format(self.cell_name,
+                                                 wrapped_pins))

         # write a PININFO line
-        pin_info = "*.PININFO"
-        for pin in self.pins:
-            if self.pin_type[pin] == "INPUT":
-                pin_info += " {0}:I".format(pin)
-            elif self.pin_type[pin] == "OUTPUT":
-                pin_info += " {0}:O".format(pin)
-            else:
-                pin_info += " {0}:B".format(pin)
-        sp.write(pin_info + "\n")
+        if False:
+            pin_info = "*.PININFO"
+            for pin in self.pins:
+                if self.pin_type[pin] == "INPUT":
+                    pin_info += " {0}:I".format(pin)
+                elif self.pin_type[pin] == "OUTPUT":
+                    pin_info += " {0}:O".format(pin)
+                else:
+                    pin_info += " {0}:B".format(pin)
+            sp.write(pin_info + "\n")

         # Also write pins as comments
         for pin in self.pins:

@@ -391,9 +394,11 @@ class spice():
                                              " ".join(self.conns[i])))
             sp.write("\n")
         else:
-            sp.write("X{0} {1} {2}\n".format(self.insts[i].name,
-                                             " ".join(self.conns[i]),
-                                             self.insts[i].mod.cell_name))
+            wrapped_connections = "\n+ ".join(tr.wrap(" ".join(self.conns[i])))
+
+            sp.write("X{0}\n+ {1}\n+ {2}\n".format(self.insts[i].name,
+                                                   wrapped_connections,
+                                                   self.insts[i].mod.cell_name))

     sp.write(".ENDS {0}\n".format(self.cell_name))

@@ -409,6 +414,7 @@ class spice():

         sp.write("\n")

+
     def sp_write(self, spname, lvs=False, trim=False):
         """Writes the spice to files"""
         debug.info(3, "Writing to {0}".format(spname))
@@ -1,17 +1,17 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-from base import vector
-from base import pin_layout
-from tech import layer_names
 import os
 import shutil
-from globals import OPTS
+from openram import debug
+from openram.base import vector
+from openram.base import pin_layout
+from openram.tech import layer_names
+from openram import OPTS


 class lef:

@@ -64,7 +64,7 @@ class lef:
         f.write('puts "Finished writing LEF cell {}"\n'.format(self.name))
         f.close()
         os.system("chmod u+x {}".format(run_file))
-        from run_script import run_script
+        from openram.verify.run_script import run_script
         (outfile, errfile, resultsfile) = run_script(self.name, "lef")

     def lef_write(self, lef_name):

@@ -75,7 +75,7 @@ class lef:
         # return

         # To maintain the indent level easily
-        self.indent = ""
+        self.indent = ""

         if OPTS.detailed_lef:
             debug.info(3, "Writing detailed LEF to {0}".format(lef_name))

@@ -88,7 +88,7 @@ class lef:

         for pin_name in self.pins:
             self.lef_write_pin(pin_name)
-
+
         self.lef_write_obstructions(OPTS.detailed_lef)
         self.lef_write_footer()
         self.lef.close()

@@ -220,4 +220,3 @@ class lef:
                                                    round(item[1],
                                                          self.round_grid)))
         self.lef.write(" ;\n")
-
@@ -1,12 +1,13 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-from tech import parameter
+from openram import debug
+from openram.tech import parameter
+

 class logical_effort():
     """
@@ -1,15 +1,15 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-from tech import GDS, drc
-from .vector import vector
-from tech import layer, layer_indices
-import math
+from openram import debug
+from openram.tech import GDS, drc
+from openram.tech import layer, layer_indices
+from .vector import vector
+

 class pin_layout:

@@ -45,11 +45,11 @@ class pin_layout:
             if self.same_lpp(layer_name_pp, lpp):
                 self._layer = layer_name
                 break

         else:
             try:
-                from tech import layer_override
-                from tech import layer_override_name
+                from openram.tech import layer_override
+                from openram.tech import layer_override_name
                 if layer_override[name]:
                     self.lpp = layer_override[name]
                     self.layer = "pwellp"

@@ -57,7 +57,7 @@ class pin_layout:
                 return
             except:
                 debug.error("Layer {} is not a valid routing layer in the tech file.".format(layer_name_pp), -1)
-
+
         self.lpp = layer[self.layer]
         self._recompute_hash()

@@ -406,15 +406,15 @@ class pin_layout:
         # Try to use a global pin purpose if it exists,
         # otherwise, use the regular purpose
         try:
-            from tech import pin_purpose as global_pin_purpose
+            from openram.tech import pin_purpose as global_pin_purpose
             pin_purpose = global_pin_purpose
         except ImportError:
             pass

         try:
-            from tech import label_purpose
+            from openram.tech import label_purpose
             try:
-                from tech import layer_override_purpose
+                from openram.tech import layer_override_purpose
                 if pin_layer_num in layer_override_purpose:
                     layer_num = layer_override_purpose[pin_layer_num][0]
                     label_purpose = layer_override_purpose[pin_layer_num][1]
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -1,17 +1,18 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
+from itertools import tee
+from openram import debug
+from openram.sram_factory import factory
+from openram.tech import drc
 from .design import design
 from .vector import vector
 from .vector3d import vector3d
-from tech import drc
-from itertools import tee
-from sram_factory import factory
+

 class route(design):
     """
@@ -1,6 +1,11 @@
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
 import copy
 from collections import defaultdict
-import debug
+from openram import debug


 class timing_graph():

@@ -119,7 +124,7 @@ class timing_graph():
             # If at the last output, include the final output load
             if i == len(path) - 2:
                 cout += load
-
+
             if params["model_name"] == "cacti":
                 delays.append(path_edge_mod.cacti_delay(corner, cur_slew, cout, params))
                 cur_slew = delays[-1].slew

@@ -130,14 +135,14 @@ class timing_graph():
                                  return_value=1)

         return delays

     def get_edge_mods(self, path):
         """Return all edge mods associated with path"""

         if len(path) == 0:
             return []

-        return [self.edge_mods[(path[i], path[i+1])] for i in range(len(path)-1)]
+        return [self.edge_mods[(path[i], path[i+1])] for i in range(len(path)-1)]

     def __str__(self):
         """ override print function output """

@@ -153,4 +158,3 @@ class timing_graph():
         """ override print function output """

         return str(self)
-
@@ -1,24 +1,22 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
 import os
 import math

-from gdsMill import gdsMill
-import tech
-import globals
-import debug
+from openram import debug
+from openram import tech
+from openram.gdsMill import gdsMill
+from openram import OPTS
 from .vector import vector
 from .pin_layout import pin_layout
 try:
-    from tech import special_purposes
+    from openram.tech import special_purposes
 except ImportError:
     special_purposes = {}
-OPTS = globals.OPTS


 def ceil(decimal):

@@ -159,11 +157,11 @@ def get_gds_pins(pin_names, name, gds_filename, units):
         # may have must-connect pins
         if isinstance(lpp[1], list):
             try:
-                from tech import layer_override
+                from openram.tech import layer_override
                 if layer_override[pin_name]:
                     lpp = layer_override[pin_name.textString]
             except:
                 pass
-            pass
+            pass
             lpp = (lpp[0], None)
         cell[str(pin_name)].append(pin_layout(pin_name, rect, lpp))
@@ -1,13 +1,12 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-
 import math
-import tech
+from openram import tech


 class vector():
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -1,12 +1,12 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
 import math
-from tech import spice
+from openram.tech import spice


 class verilog:

@@ -24,7 +24,7 @@ class verilog:
         self.vf.write("// OpenRAM SRAM model\n")
         self.vf.write("// Words: {0}\n".format(self.num_words))
         self.vf.write("// Word size: {0}\n".format(self.word_size))
-        if self.write_size:
+        if self.write_size != self.word_size:
             self.vf.write("// Write size: {0}\n\n".format(self.write_size))
         else:
             self.vf.write("\n")

@@ -38,7 +38,10 @@ class verilog:
         except KeyError:
             self.gnd_name = "gnd"

-        self.vf.write("module {0}(\n".format(self.name))
+        if self.num_banks > 1:
+            self.vf.write("module {0}(\n".format(self.name))
+        else:
+            self.vf.write("module {0}(\n".format(self.name))
         self.vf.write("`ifdef USE_POWER_PINS\n")
         self.vf.write(" {},\n".format(self.vdd_name))
         self.vf.write(" {},\n".format(self.gnd_name))

@@ -53,14 +56,14 @@ class verilog:
             self.vf.write("// Port {0}: W\n".format(port))
         if port in self.readwrite_ports:
             self.vf.write(" clk{0},csb{0},web{0},".format(port))
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write("wmask{},".format(port))
             if self.num_spare_cols > 0:
                 self.vf.write("spare_wen{0},".format(port))
             self.vf.write("addr{0},din{0},dout{0}".format(port))
         elif port in self.write_ports:
             self.vf.write(" clk{0},csb{0},".format(port))
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write("wmask{},".format(port))
             if self.num_spare_cols > 0:
                 self.vf.write("spare_wen{0},".format(port))

@@ -72,11 +75,11 @@ class verilog:
                 self.vf.write(",\n")
         self.vf.write("\n );\n\n")

-        if self.write_size:
+        if self.write_size != self.word_size:
             self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
             self.vf.write(" parameter NUM_WMASKS = {0} ;\n".format(self.num_wmasks))
         self.vf.write(" parameter DATA_WIDTH = {0} ;\n".format(self.word_size + self.num_spare_cols))
-        self.vf.write(" parameter ADDR_WIDTH = {0} ;\n".format(self.addr_size))
+        self.vf.write(" parameter ADDR_WIDTH = {0} ;\n".format(self.bank_addr_size))
         self.vf.write(" parameter RAM_DEPTH = 1 << ADDR_WIDTH;\n")
         self.vf.write(" // FIXME: This delay is arbitrary.\n")
         self.vf.write(" parameter DELAY = 3 ;\n")

@@ -125,7 +128,7 @@ class verilog:
         if port in self.readwrite_ports:
             self.vf.write(" reg web{0}_reg;\n".format(port))
         if port in self.write_ports:
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write(" reg [NUM_WMASKS-1:0] wmask{0}_reg;\n".format(port))
             if self.num_spare_cols > 1:
                 self.vf.write(" reg [{1}:0] spare_wen{0}_reg;".format(port, self.num_spare_cols - 1))

@@ -149,7 +152,7 @@ class verilog:
         if port in self.readwrite_ports:
             self.vf.write(" web{0}_reg = web{0};\n".format(port))
         if port in self.write_ports:
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write(" wmask{0}_reg = wmask{0};\n".format(port))
             if self.num_spare_cols:
                 self.vf.write(" spare_wen{0}_reg = spare_wen{0};\n".format(port))

@@ -169,13 +172,13 @@ class verilog:
             self.vf.write(" $display($time,\" Reading %m addr{0}=%b dout{0}=%b\",addr{0}_reg,mem[addr{0}_reg]);\n".format(port))
         if port in self.readwrite_ports:
             self.vf.write(" if ( !csb{0}_reg && !web{0}_reg && VERBOSE )\n".format(port))
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write(" $display($time,\" Writing %m addr{0}=%b din{0}=%b wmask{0}=%b\",addr{0}_reg,din{0}_reg,wmask{0}_reg);\n".format(port))
             else:
                 self.vf.write(" $display($time,\" Writing %m addr{0}=%b din{0}=%b\",addr{0}_reg,din{0}_reg);\n".format(port))
         elif port in self.write_ports:
             self.vf.write(" if ( !csb{0}_reg && VERBOSE )\n".format(port))
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write(" $display($time,\" Writing %m addr{0}=%b din{0}=%b wmask{0}=%b\",addr{0}_reg,din{0}_reg,wmask{0}_reg);\n".format(port))
             else:
                 self.vf.write(" $display($time,\" Writing %m addr{0}=%b din{0}=%b\",addr{0}_reg,din{0}_reg);\n".format(port))

@@ -193,7 +196,7 @@ class verilog:

         self.vf.write(" input [ADDR_WIDTH-1:0] addr{0};\n".format(port))
         if port in self.write_ports:
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.vf.write(" input [NUM_WMASKS-1:0] wmask{0}; // write mask\n".format(port))
             if self.num_spare_cols == 1:
                 self.vf.write(" input spare_wen{0}; // spare mask\n".format(port))

@@ -218,7 +221,7 @@ class verilog:
         else:
             self.vf.write(" if (!csb{0}_reg) begin\n".format(port))

-        if self.write_size:
+        if self.write_size != self.word_size:
             for mask in range(0, self.num_wmasks):
                 lower = mask * self.write_size
                 upper = lower + self.write_size - 1
@@ -1,13 +1,13 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from tech import drc
+from openram.tech import drc
+from openram.sram_factory import factory
 from .wire_path import wire_path
-from sram_factory import factory


 class wire(wire_path):

@@ -68,10 +68,10 @@ class wire(wire_path):
     This is contact direction independent pitch,
     i.e. we take the maximum contact dimension
     """

     # This is here for the unit tests which may not have
     # initialized the static parts of the layout class yet.
-    from base import layout
+    from openram.base import layout
     layout("fake", "fake")

     (layer1, via, layer2) = layer_stack
@@ -1,15 +1,16 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from .vector import vector
-from .utils import snap_to_grid
+from openram.tech import drc
+from openram.tech import layer as techlayer
 from .design import design
-from tech import drc
-from tech import layer as techlayer
+from .utils import snap_to_grid
+from .vector import vector
+

 def create_rectilinear_route(my_list):
     """ Add intermediate nodes if it isn't rectilinear. Also skip
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.

@@ -16,14 +16,14 @@ class wire_spice_model():
         self.wire_r = self.cal_wire_r(wire_length, wire_width) # r in each segment

     def cal_wire_c(self, wire_length, wire_width):
-        from tech import spice
+        from openram.tech import spice
         # Convert the F/um^2 to fF/um^2 then multiple by width and length
         total_c = (spice["wire_unit_c"]*1e12) * wire_length * wire_width
         wire_c = total_c / self.lump_num
         return wire_c

     def cal_wire_r(self, wire_length, wire_width):
-        from tech import spice
+        from openram.tech import spice
         total_r = spice["wire_unit_r"] * wire_length / wire_width
         wire_r = total_r / self.lump_num
         return wire_r
@@ -1,13 +1,13 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
 import os
-import debug
-from globals import OPTS, find_exe, get_tool
+from openram import debug
+from openram import OPTS, find_exe, get_tool
 from .lib import *
 from .delay import *
 from .elmore import *

@@ -56,4 +56,3 @@ if not OPTS.analytical_delay:
 else:
     debug.info(1, "Analytical model enabled.")

-
@ -1,325 +1,325 @@
|
|||
#
|
||||
# Copyright (c) 2016-2019 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
|
||||
import debug
|
||||
|
||||
import csv
|
||||
import math
|
||||
import numpy as np
|
||||
import os
|
||||
|
||||
process_transform = {'SS':0.0, 'TT': 0.5, 'FF':1.0}
|
||||
|
||||
def get_data_names(file_name, exclude_area=True):
|
||||
"""
|
||||
Returns just the data names in the first row of the CSV
|
||||
"""
|
||||
|
||||
with open(file_name, newline='') as csvfile:
|
||||
csv_reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
|
||||
row_iter = 0
|
||||
# reader is iterable not a list, probably a better way to do this
|
||||
for row in csv_reader:
|
||||
# Return names from first row
|
||||
names = row[0].split(',')
|
||||
break
|
||||
if exclude_area:
|
||||
try:
|
||||
area_ind = names.index('area')
|
||||
except ValueError:
|
||||
area_ind = -1
|
||||
|
||||
if area_ind != -1:
|
||||
names = names[:area_ind] + names[area_ind+1:]
|
||||
return names
|
||||
|
||||
def get_data(file_name):
|
||||
"""
|
||||
Returns data in CSV as lists of features
|
||||
"""
|
||||
|
||||
with open(file_name, newline='') as csvfile:
|
||||
csv_reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
|
||||
row_iter = 0
|
||||
removed_items = 1
|
||||
for row in csv_reader:
|
||||
row_iter += 1
|
||||
if row_iter == 1:
|
||||
feature_names = row[0].split(',')
|
||||
input_list = [[] for _ in range(len(feature_names)-removed_items)]
|
||||
try:
|
||||
# Save to remove area
|
||||
area_ind = feature_names.index('area')
|
||||
except ValueError:
|
||||
area_ind = -1
|
||||
|
||||
try:
|
||||
process_ind = feature_names.index('process')
|
||||
except:
|
||||
debug.error('Process not included as a feature.')
|
||||
continue
|
||||
|
||||
|
||||
|
||||
data = []
|
||||
split_str = row[0].split(',')
|
||||
for i in range(len(split_str)):
|
||||
if i == process_ind:
|
||||
data.append(process_transform[split_str[i]])
|
||||
elif i == area_ind:
|
||||
continue
|
||||
else:
|
||||
data.append(float(split_str[i]))
|
||||
|
||||
data[0] = math.log(data[0], 2)
|
||||
|
||||
for i in range(len(data)):
|
||||
input_list[i].append(data[i])
|
||||
|
||||
return input_list
|
||||
|
||||
def apply_samples_to_data(all_data, algo_samples):
|
||||
# Take samples from algorithm and match them to samples in data
|
||||
data_samples, unused_data = [], []
|
||||
sample_positions = set()
|
||||
for sample in algo_samples:
|
||||
sample_positions.add(find_sample_position_with_min_error(all_data, sample))
|
||||
|
||||
for i in range(len(all_data)):
|
||||
if i in sample_positions:
|
||||
data_samples.append(all_data[i])
|
||||
else:
|
||||
unused_data.append(all_data[i])
|
||||
|
||||
return data_samples, unused_data
|
||||
|
||||
def find_sample_position_with_min_error(data, sampled_vals):
|
||||
min_error = 0
|
||||
sample_pos = 0
|
||||
count = 0
|
||||
for data_slice in data:
|
||||
error = squared_error(data_slice, sampled_vals)
|
||||
if min_error == 0 or error < min_error:
|
||||
min_error = error
|
||||
sample_pos = count
|
||||
count += 1
|
||||
return sample_pos
|
||||
|
||||
def squared_error(list_a, list_b):
|
||||
error_sum = 0;
|
||||
for a,b in zip(list_a, list_b):
|
||||
error_sum+=(a-b)**2
|
||||
return error_sum
|
||||
|
||||
|
||||
def get_max_min_from_datasets(dir):
|
||||
if not os.path.isdir(dir):
|
||||
debug.warning("Input Directory not found:{}".format(dir))
|
||||
return [], [], []
|
||||
|
||||
# Assuming all files are CSV
|
||||
data_files = [f for f in os.listdir(dir) if os.path.isfile(os.path.join(dir, f))]
|
||||
maxs,mins,sums,total_count = [],[],[],0
|
||||
for file in data_files:
|
||||
data = get_data(os.path.join(dir, file))
|
||||
# Get max, min, sum, and count from every file
|
||||
data_max, data_min, data_sum, count = [],[],[], 0
|
||||
for feature_list in data:
|
||||
data_max.append(max(feature_list))
|
||||
data_min.append(min(feature_list))
|
||||
data_sum.append(sum(feature_list))
|
||||
count = len(feature_list)
|
||||
|
||||
# Aggregate the data
|
||||
if not maxs or not mins or not sums:
|
||||
maxs,mins,sums,total_count = data_max,data_min,data_sum,count
|
||||
else:
|
||||
for i in range(len(maxs)):
|
||||
maxs[i] = max(data_max[i], maxs[i])
|
||||
mins[i] = min(data_min[i], mins[i])
|
||||
sums[i] = data_sum[i]+sums[i]
|
||||
total_count+=count
|
||||
|
||||
avgs = [s/total_count for s in sums]
|
||||
return maxs,mins,avgs
|
||||
|
||||
def get_max_min_from_file(path):
|
||||
if not os.path.isfile(path):
|
||||
debug.warning("Input file not found: {}".format(path))
|
||||
return [], [], []
|
||||
|
||||
|
||||
data = get_data(path)
|
||||
# Get max, min, sum, and count from every file
|
||||
data_max, data_min, data_sum, count = [],[],[], 0
|
||||
for feature_list in data:
|
||||
data_max.append(max(feature_list))
|
||||
data_min.append(min(feature_list))
|
||||
data_sum.append(sum(feature_list))
|
||||
count = len(feature_list)
|
||||
|
||||
avgs = [s/count for s in data_sum]
|
||||
return data_max, data_min, avgs
|
||||
|
||||
def get_data_and_scale(file_name, sample_dir):
|
||||
maxs,mins,avgs = get_max_min_from_datasets(sample_dir)
|
||||
|
||||
# Get data
|
||||
all_data = get_data(file_name)
|
||||
|
||||
# Scale data from file
|
||||
self_scaled_data = [[] for _ in range(len(all_data[0]))]
|
||||
self_maxs,self_mins = [],[]
|
||||
for feature_list, cur_max, cur_min in zip(all_data,maxs, mins):
|
||||
for i in range(len(feature_list)):
|
||||
self_scaled_data[i].append((feature_list[i]-cur_min)/(cur_max-cur_min))
|
||||
|
||||
return np.asarray(self_scaled_data)
|
||||
|
||||
def rescale_data(data, old_maxs, old_mins, new_maxs, new_mins):
|
||||
# unscale from old values, rescale by new values
|
||||
data_new_scaling = []
|
||||
for data_row in data:
|
||||
scaled_row = []
|
||||
for val, old_max,old_min, cur_max, cur_min in zip(data_row, old_maxs,old_mins, new_maxs, new_mins):
|
||||
unscaled_data = val*(old_max-old_min) + old_min
|
||||
scaled_row.append((unscaled_data-cur_min)/(cur_max-cur_min))
|
||||
|
||||
data_new_scaling.append(scaled_row)
|
||||
|
||||
return data_new_scaling
|
||||
|
||||
def sample_from_file(num_samples, file_name, sample_dir=None):
|
||||
"""
|
||||
Get a portion of the data from CSV file and scale it based on max/min of dataset.
|
||||
Duplicate samples are trimmed.
|
||||
"""
|
||||
|
||||
if sample_dir:
|
||||
maxs,mins,avgs = get_max_min_from_datasets(sample_dir)
|
||||
else:
|
||||
maxs,mins,avgs = [], [], []
|
||||
|
||||
# Get data
|
||||
all_data = get_data(file_name)
|
||||
|
||||
# Get algorithms sample points, assuming hypercube for now
|
||||
num_labels = 1
|
||||
inp_dims = len(all_data) - num_labels
|
||||
samples = np.random.rand(num_samples, inp_dims)
|
||||
|
||||
|
||||
# Scale data from file
|
||||
self_scaled_data = [[] for _ in range(len(all_data[0]))]
|
||||
self_maxs,self_mins = [],[]
|
||||
for feature_list in all_data:
|
||||
max_val = max(feature_list)
|
||||
self_maxs.append(max_val)
|
||||
min_val = min(feature_list)
|
||||
self_mins.append(min_val)
|
||||
for i in range(len(feature_list)):
|
||||
self_scaled_data[i].append((feature_list[i]-min_val)/(max_val-min_val))
|
||||
# Apply algorithm sampling points to available data
|
||||
sampled_data, unused_data = apply_samples_to_data(self_scaled_data,samples)
|
||||
|
||||
#unscale values and rescale using all available data (both sampled and unused points rescaled)
|
||||
if len(maxs)!=0 and len(mins)!=0:
|
||||
sampled_data = rescale_data(sampled_data, self_maxs,self_mins, maxs, mins)
|
||||
unused_new_scaling = rescale_data(unused_data, self_maxs,self_mins, maxs, mins)
|
||||
|
||||
return np.asarray(sampled_data), np.asarray(unused_new_scaling)
|
||||
|
||||
def get_scaled_data(file_name):
    """Get data from CSV file and scale it based on max/min of dataset"""

    if file_name:
        maxs,mins,avgs = get_max_min_from_file(file_name)
    else:
        maxs,mins,avgs = [], [], []

    # Get data
    all_data = get_data(file_name)

    # Data is scaled by max/min and data format is changed to points vs feature lists
    self_scaled_data = scale_data_and_transform(all_data)
    data_np = np.asarray(self_scaled_data)
    return data_np


def scale_data_and_transform(data):
    """
    Assume data is a list of features, change to a list of points and max/min scale
    """

    scaled_data = [[] for _ in range(len(data[0]))]
    for feature_list in data:
        max_val = max(feature_list)
        min_val = min(feature_list)

        for i in range(len(feature_list)):
            if max_val == min_val:
                scaled_data[i].append(0.0)
            else:
                scaled_data[i].append((feature_list[i]-min_val)/(max_val-min_val))
    return scaled_data


def scale_input_datapoint(point, file_path):
    """
    Input data has no output and needs to be scaled like the model inputs during
    training.
    """
    maxs, mins, avgs = get_max_min_from_file(file_path)
    debug.info(3, "maxs={}".format(maxs))
    debug.info(3, "mins={}".format(mins))
    debug.info(3, "point={}".format(point))

    scaled_point = []
    for feature, mx, mn in zip(point, maxs, mins):
        if mx == mn:
            scaled_point.append(0.0)
        else:
            scaled_point.append((feature-mn)/(mx-mn))
    return scaled_point


def unscale_data(data, file_path, pos=None):
    if file_path:
        maxs,mins,avgs = get_max_min_from_file(file_path)
    else:
        debug.error("Must provide reference data to unscale")
        return None

    # Hard coded to only convert the last max/min (i.e. the label of the data)
    if pos is None:
        maxs,mins,avgs = maxs[-1],mins[-1],avgs[-1]
    else:
        maxs,mins,avgs = maxs[pos],mins[pos],avgs[pos]
    unscaled_data = []
    for data_row in data:
        unscaled_val = data_row*(maxs-mins) + mins
        unscaled_data.append(unscaled_val)

    return unscaled_data


def abs_error(labels, preds):
    total_error = 0
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        total_error += cur_error
    return total_error/len(labels)


def max_error(labels, preds):
    mx_error = 0
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        mx_error = max(cur_error, mx_error)
    return mx_error


def min_error(labels, preds):
    mn_error = 1
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        mn_error = min(cur_error, mn_error)
    return mn_error

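All of the scaling helpers above use plain min-max normalization to [0, 1], with constant features pinned to 0.0. A minimal standalone sketch of the scale/unscale round trip (the function names here are illustrative, not part of this module):

```python
def minmax_scale(feature_list):
    # Scale one feature column into [0, 1]; a constant column maps to all
    # zeros, matching the max_val == min_val guard in scale_data_and_transform.
    mx, mn = max(feature_list), min(feature_list)
    if mx == mn:
        return [0.0 for _ in feature_list]
    return [(v - mn) / (mx - mn) for v in feature_list]


def minmax_unscale(scaled, mx, mn):
    # Invert the scaling given the original extremes, as unscale_data does.
    return [v * (mx - mn) + mn for v in scaled]


vals = [2.0, 4.0, 6.0]
scaled = minmax_scale(vals)                              # [0.0, 0.5, 1.0]
restored = minmax_unscale(scaled, max(vals), min(vals))  # [2.0, 4.0, 6.0]
```

The round trip is exact only when the same max/min pair is used in both directions, which is why `unscale_data` re-reads the extremes from the reference CSV.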
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#
import os
import csv
import math
import numpy as np
from openram import debug


process_transform = {'SS':0.0, 'TT': 0.5, 'FF':1.0}

def get_data_names(file_name, exclude_area=True):
    """
    Returns just the data names in the first row of the CSV
    """

    with open(file_name, newline='') as csvfile:
        csv_reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
        # reader is iterable not a list, probably a better way to do this
        for row in csv_reader:
            # Return names from first row
            names = row[0].split(',')
            break
    if exclude_area:
        try:
            area_ind = names.index('area')
        except ValueError:
            area_ind = -1

        if area_ind != -1:
            names = names[:area_ind] + names[area_ind+1:]
    return names

def get_data(file_name):
    """
    Returns data in CSV as lists of features
    """

    with open(file_name, newline='') as csvfile:
        csv_reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
        row_iter = 0
        removed_items = 1
        for row in csv_reader:
            row_iter += 1
            if row_iter == 1:
                feature_names = row[0].split(',')
                input_list = [[] for _ in range(len(feature_names)-removed_items)]
                try:
                    # Save to remove area
                    area_ind = feature_names.index('area')
                except ValueError:
                    area_ind = -1

                try:
                    process_ind = feature_names.index('process')
                except ValueError:
                    debug.error('Process not included as a feature.')
                continue

            data = []
            split_str = row[0].split(',')
            for i in range(len(split_str)):
                if i == process_ind:
                    data.append(process_transform[split_str[i]])
                elif i == area_ind:
                    continue
                else:
                    data.append(float(split_str[i]))

            data[0] = math.log(data[0], 2)

            for i in range(len(data)):
                input_list[i].append(data[i])

    return input_list

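`get_data` applies two feature transforms while reading each row: the process corner string is mapped to a number via `process_transform`, and the first feature is log2-transformed. A standalone illustration of that per-row transform (the helper name and example row are hypothetical):

```python
import math

process_transform = {'SS': 0.0, 'TT': 0.5, 'FF': 1.0}


def transform_row(row, process_ind):
    # Map the process corner to its numeric value, parse everything else as a
    # float, then replace the first feature with its base-2 log.
    data = [process_transform[v] if i == process_ind else float(v)
            for i, v in enumerate(row)]
    data[0] = math.log(data[0], 2)
    return data


transform_row(['16', 'TT', '1.8'], process_ind=1)  # approximately [4.0, 0.5, 1.8]
```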
def apply_samples_to_data(all_data, algo_samples):
    # Take samples from algorithm and match them to samples in data
    data_samples, unused_data = [], []
    sample_positions = set()
    for sample in algo_samples:
        sample_positions.add(find_sample_position_with_min_error(all_data, sample))

    for i in range(len(all_data)):
        if i in sample_positions:
            data_samples.append(all_data[i])
        else:
            unused_data.append(all_data[i])

    return data_samples, unused_data

def find_sample_position_with_min_error(data, sampled_vals):
    # Use None as the sentinel so an exact match (zero error) is not overwritten
    min_error = None
    sample_pos = 0
    count = 0
    for data_slice in data:
        error = squared_error(data_slice, sampled_vals)
        if min_error is None or error < min_error:
            min_error = error
            sample_pos = count
        count += 1
    return sample_pos

def squared_error(list_a, list_b):
    error_sum = 0
    for a,b in zip(list_a, list_b):
        error_sum += (a-b)**2
    return error_sum

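`apply_samples_to_data` matches each algorithm-generated sample to the nearest available data point by squared error. A compact standalone equivalent of that nearest-match search (names are illustrative):

```python
def squared_error(list_a, list_b):
    # Sum of squared per-feature differences between two points.
    return sum((a - b) ** 2 for a, b in zip(list_a, list_b))


def nearest_index(data, sample):
    # Index of the data point with the smallest squared error to the sample;
    # ties resolve to the earliest index.
    return min(range(len(data)), key=lambda i: squared_error(data[i], sample))


points = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
nearest_index(points, [0.4, 0.6])  # -> 1
```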
def get_max_min_from_datasets(dir):
    if not os.path.isdir(dir):
        debug.warning("Input Directory not found:{}".format(dir))
        return [], [], []

    # Assuming all files are CSV
    data_files = [f for f in os.listdir(dir) if os.path.isfile(os.path.join(dir, f))]
    maxs,mins,sums,total_count = [],[],[],0
    for file in data_files:
        data = get_data(os.path.join(dir, file))
        # Get max, min, sum, and count from every file
        data_max, data_min, data_sum, count = [],[],[], 0
        for feature_list in data:
            data_max.append(max(feature_list))
            data_min.append(min(feature_list))
            data_sum.append(sum(feature_list))
            count = len(feature_list)

        # Aggregate the data
        if not maxs or not mins or not sums:
            maxs,mins,sums,total_count = data_max,data_min,data_sum,count
        else:
            for i in range(len(maxs)):
                maxs[i] = max(data_max[i], maxs[i])
                mins[i] = min(data_min[i], mins[i])
                sums[i] = data_sum[i]+sums[i]
            total_count += count

    avgs = [s/total_count for s in sums]
    return maxs,mins,avgs

def get_max_min_from_file(path):
    if not os.path.isfile(path):
        debug.warning("Input file not found: {}".format(path))
        return [], [], []

    data = get_data(path)
    # Get max, min, sum, and count from every file
    data_max, data_min, data_sum, count = [],[],[], 0
    for feature_list in data:
        data_max.append(max(feature_list))
        data_min.append(min(feature_list))
        data_sum.append(sum(feature_list))
        count = len(feature_list)

    avgs = [s/count for s in data_sum]
    return data_max, data_min, avgs

def get_data_and_scale(file_name, sample_dir):
    maxs,mins,avgs = get_max_min_from_datasets(sample_dir)

    # Get data
    all_data = get_data(file_name)

    # Scale data from file
    self_scaled_data = [[] for _ in range(len(all_data[0]))]
    self_maxs,self_mins = [],[]
    for feature_list, cur_max, cur_min in zip(all_data, maxs, mins):
        for i in range(len(feature_list)):
            self_scaled_data[i].append((feature_list[i]-cur_min)/(cur_max-cur_min))

    return np.asarray(self_scaled_data)

def rescale_data(data, old_maxs, old_mins, new_maxs, new_mins):
    # unscale from old values, rescale by new values
    data_new_scaling = []
    for data_row in data:
        scaled_row = []
        for val, old_max, old_min, cur_max, cur_min in zip(data_row, old_maxs, old_mins, new_maxs, new_mins):
            unscaled_data = val*(old_max-old_min) + old_min
            scaled_row.append((unscaled_data-cur_min)/(cur_max-cur_min))

        data_new_scaling.append(scaled_row)

    return data_new_scaling

def sample_from_file(num_samples, file_name, sample_dir=None):
    """
    Get a portion of the data from CSV file and scale it based on max/min of dataset.
    Duplicate samples are trimmed.
    """

    if sample_dir:
        maxs,mins,avgs = get_max_min_from_datasets(sample_dir)
    else:
        maxs,mins,avgs = [], [], []

    # Get data
    all_data = get_data(file_name)

    # Get algorithms sample points, assuming hypercube for now
    num_labels = 1
    inp_dims = len(all_data) - num_labels
    samples = np.random.rand(num_samples, inp_dims)

    # Scale data from file
    self_scaled_data = [[] for _ in range(len(all_data[0]))]
    self_maxs,self_mins = [],[]
    for feature_list in all_data:
        max_val = max(feature_list)
        self_maxs.append(max_val)
        min_val = min(feature_list)
        self_mins.append(min_val)
        for i in range(len(feature_list)):
            self_scaled_data[i].append((feature_list[i]-min_val)/(max_val-min_val))
    # Apply algorithm sampling points to available data
    sampled_data, unused_data = apply_samples_to_data(self_scaled_data, samples)

    # unscale values and rescale using all available data (both sampled and unused points rescaled)
    if len(maxs)!=0 and len(mins)!=0:
        sampled_data = rescale_data(sampled_data, self_maxs, self_mins, maxs, mins)
        unused_new_scaling = rescale_data(unused_data, self_maxs, self_mins, maxs, mins)
    else:
        # No reference extremes given; keep the file's own scaling
        unused_new_scaling = unused_data

    return np.asarray(sampled_data), np.asarray(unused_new_scaling)

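`rescale_data` composes an unscale step (using the file's own extremes) with a rescale step (using the dataset-wide extremes). A single-value sketch of that composition, with hypothetical values:

```python
def rescale(val, old_max, old_min, new_max, new_min):
    # Undo the old [0, 1] scaling, then re-apply scaling against the new extremes.
    unscaled = val * (old_max - old_min) + old_min
    return (unscaled - new_min) / (new_max - new_min)


# A point scaled against the range [2, 6] re-expressed against [0, 8]:
rescale(0.5, old_max=6.0, old_min=2.0, new_max=8.0, new_min=0.0)  # -> 0.5
```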
def get_scaled_data(file_name):
    """Get data from CSV file and scale it based on max/min of dataset"""

    if file_name:
        maxs,mins,avgs = get_max_min_from_file(file_name)
    else:
        maxs,mins,avgs = [], [], []

    # Get data
    all_data = get_data(file_name)

    # Data is scaled by max/min and data format is changed to points vs feature lists
    self_scaled_data = scale_data_and_transform(all_data)
    data_np = np.asarray(self_scaled_data)
    return data_np

def scale_data_and_transform(data):
    """
    Assume data is a list of features, change to a list of points and max/min scale
    """

    scaled_data = [[] for _ in range(len(data[0]))]
    for feature_list in data:
        max_val = max(feature_list)
        min_val = min(feature_list)

        for i in range(len(feature_list)):
            if max_val == min_val:
                scaled_data[i].append(0.0)
            else:
                scaled_data[i].append((feature_list[i]-min_val)/(max_val-min_val))
    return scaled_data

def scale_input_datapoint(point, file_path):
    """
    Input data has no output and needs to be scaled like the model inputs during
    training.
    """
    maxs, mins, avgs = get_max_min_from_file(file_path)
    debug.info(3, "maxs={}".format(maxs))
    debug.info(3, "mins={}".format(mins))
    debug.info(3, "point={}".format(point))

    scaled_point = []
    for feature, mx, mn in zip(point, maxs, mins):
        if mx == mn:
            scaled_point.append(0.0)
        else:
            scaled_point.append((feature-mn)/(mx-mn))
    return scaled_point

def unscale_data(data, file_path, pos=None):
    if file_path:
        maxs,mins,avgs = get_max_min_from_file(file_path)
    else:
        debug.error("Must provide reference data to unscale")
        return None

    # Hard coded to only convert the last max/min (i.e. the label of the data)
    if pos is None:
        maxs,mins,avgs = maxs[-1],mins[-1],avgs[-1]
    else:
        maxs,mins,avgs = maxs[pos],mins[pos],avgs[pos]
    unscaled_data = []
    for data_row in data:
        unscaled_val = data_row*(maxs-mins) + mins
        unscaled_data.append(unscaled_val)

    return unscaled_data

def abs_error(labels, preds):
    total_error = 0
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        total_error += cur_error
    return total_error/len(labels)

def max_error(labels, preds):
    mx_error = 0
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        mx_error = max(cur_error, mx_error)
    return mx_error

def min_error(labels, preds):
    mn_error = 1
    for label_i, pred_i in zip(labels, preds):
        cur_error = abs(label_i[0]-pred_i[0])/label_i[0]
        mn_error = min(cur_error, mn_error)
    return mn_error
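The three error helpers all compute the per-sample relative error |label - pred| / label and then reduce it differently (mean, max, min). A toy check of that shared kernel (standalone, mirroring the functions above):

```python
def relative_errors(labels, preds):
    # Per-sample relative error on the first (label) column of each row.
    return [abs(l[0] - p[0]) / l[0] for l, p in zip(labels, preds)]


labels = [[2.0], [4.0]]
preds = [[1.0], [5.0]]
errs = relative_errors(labels, preds)  # [0.5, 0.25]
mean_err = sum(errs) / len(errs)       # 0.375, what abs_error would return
```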
@ -1,6 +1,6 @@
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2021 Regents of the University of California and The Board
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.

@ -1,30 +1,29 @@
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2019 Regents of the University of California and The Board
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#

from .simulation import simulation
from globals import OPTS
import debug
import tech

import math
from openram import debug
from openram import tech
from openram import OPTS
from .simulation import simulation

class cacti(simulation):

class cacti(simulation):
    """
    Delay model for the SRAM.
    """

    def __init__(self, sram, spfile, corner):
        super().__init__(sram, spfile, corner)

        # self.targ_read_ports = []
        # self.targ_write_ports = []
        # self.period = 0
        # if self.write_size:
        #     if self.write_size != self.word_size:
        #         self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
        #     else:
        #         self.num_wmasks = 0
@ -33,8 +32,8 @@ class cacti(simulation):
        self.create_signal_names()
        self.add_graph_exclusions()
        self.set_params()

    def set_params(self):

    def set_params(self):
        """Set parameters specific to the corner being simulated"""
        self.params = {}
        # Set the specific functions to use for timing defined in the SRAM module

@ -42,16 +41,16 @@ class cacti(simulation):
        # Only parameter right now is r_on which is dependent on Vdd
        self.params["r_nch_on"] = self.vdd_voltage / tech.spice["i_on_n"]
        self.params["r_pch_on"] = self.vdd_voltage / tech.spice["i_on_p"]

    def get_lib_values(self, load_slews):
        """
        Return the analytical model results for the SRAM.
        """
        if OPTS.num_rw_ports > 1 or OPTS.num_w_ports > 0 and OPTS.num_r_ports > 0:
            debug.warning("In analytical mode, all ports have the timing of the first read port.")

        # Probe set to 0th bit, does not matter for analytical delay.
        self.set_probe('0' * self.addr_size, 0)
        self.set_probe('0' * self.bank_addr_size, 0)
        self.create_graph()
        self.set_internal_spice_names()
        self.create_measurement_names()

@ -77,7 +76,7 @@ class cacti(simulation):
                slew = 0
                path_delays = self.graph.get_timing(bl_path, self.corner, slew, load_farad, self.params)
                total_delay = self.sum_delays(path_delays)

                delay_ns = total_delay.delay/1e-9
                slew_ns = total_delay.slew/1e-9
                max_delay = max(max_delay, total_delay.delay)

@ -95,7 +94,7 @@ class cacti(simulation):
                elif "slew" in mname and port in self.read_ports:
                    port_data[port][mname].append(total_delay.slew / 1e-9)

        # Margin for error in period. Calculated by averaging required margin for a small and large
        # Margin for error in period. Calculated by averaging required margin for a small and large
        # memory. FIXME: margin is quite large, should be looked into.
        period_margin = 1.85
        sram_data = {"min_period": (max_delay / 1e-9) * 2 * period_margin,

@ -118,5 +117,3 @@ class cacti(simulation):
        debug.info(1, "Dynamic Power: {0} mW".format(power.dynamic))
        debug.info(1, "Leakage Power: {0} mW".format(power.leakage))
        return power
@ -1,14 +1,14 @@
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2021 Regents of the University of California and The Board
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#
import os
import re
import debug
from globals import OPTS
from openram import debug
from openram import OPTS


def relative_compare(value1, value2, error_tolerance=0.001):

@ -37,7 +37,7 @@ def parse_spice_list(filename, key):
    except IOError:
        debug.error("Unable to open spice output file: {0}".format(full_filename), 1)
        debug.archive()

    contents = f.read().lower()
    f.close()
    # val = re.search(r"{0}\s*=\s*(-?\d+.?\d*\S*)\s+.*".format(key), contents)
@ -1,20 +1,20 @@
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2021 Regents of the University of California and The Board
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#
import shutil
import debug
import tech
import math
import shutil
from openram import debug
from openram import tech
from openram import OPTS
from .stimuli import *
from .trim_spice import *
from .charutils import *
from .sram_op import *
from .bit_polarity import *
from globals import OPTS
from .simulation import simulation
from .measurements import *


@ -43,7 +43,7 @@ class delay(simulation):
        self.targ_read_ports = []
        self.targ_write_ports = []
        self.period = 0
        if self.write_size:
        if self.write_size != self.word_size:
            self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
        else:
            self.num_wmasks = 0

@ -235,10 +235,10 @@ class delay(simulation):
        qbar_meas = voltage_at_measure("v_qbar_{0}".format(meas_tag), qbar_name)

        return {bit_polarity.NONINVERTING: q_meas, bit_polarity.INVERTING: qbar_meas}

    def create_sen_and_bitline_path_measures(self):
        """Create measurements for the s_en and bitline paths for individual delays per stage."""

        # FIXME: There should be a default_read_port variable in this case, pathing is done with this
        # but is never mentioned otherwise
        port = self.read_ports[0]

@ -253,37 +253,37 @@ class delay(simulation):
        debug.check(len(bl_paths)==1, 'Found {0} paths which contain the bitline net.'.format(len(bl_paths)))
        sen_path = sen_paths[0]
        bitline_path = bl_paths[0]

        # Get the measures
        self.sen_path_meas = self.create_delay_path_measures(sen_path)
        self.bl_path_meas = self.create_delay_path_measures(bitline_path)
        all_meas = self.sen_path_meas + self.bl_path_meas

        # Paths could have duplicate measurements, remove them before they go to the stim file
        all_meas = self.remove_duplicate_meas_names(all_meas)
        # FIXME: duplicate measurements still exist in the member variables, since they have the same
        # name it will still work, but this could cause an issue in the future.

        return all_meas

        return all_meas

    def remove_duplicate_meas_names(self, measures):
        """Returns new list of measurements without duplicate names"""

        name_set = set()
        unique_measures = []
        for meas in measures:
            if meas.name not in name_set:
                name_set.add(meas.name)
                unique_measures.append(meas)

        return unique_measures

    def create_delay_path_measures(self, path):
        """Creates measurements for each net along given path."""

        # Determine the directions (RISE/FALL) of signals
        path_dirs = self.get_meas_directions(path)

        # Create the measurements
        path_meas = []
        for i in range(len(path) - 1):

@ -297,26 +297,26 @@ class delay(simulation):
            # Some bitcell logic is hardcoded for only read zeroes, force that here as well.
            path_meas[-1].meta_str = sram_op.READ_ZERO
            path_meas[-1].meta_add_delay = True

        return path_meas

    def get_meas_directions(self, path):
        """Returns SPICE measurements directions based on path."""

        # Get the edges modules which define the path
        edge_mods = self.graph.get_edge_mods(path)

        # Convert to booleans based on function of modules (inverting/non-inverting)
        mod_type_bools = [mod.is_non_inverting() for mod in edge_mods]

        # FIXME: obtuse hack to differentiate s_en input from bitline in sense amps
        if self.sen_name in path:
            # Force the sense amp to be inverting for s_en->DOUT.
            # Force the sense amp to be inverting for s_en->DOUT.
            # bitline->DOUT is non-inverting, but the module cannot differentiate inputs.
            s_en_index = path.index(self.sen_name)
            mod_type_bools[s_en_index] = False
            debug.info(2, 'Forcing sen->dout to be inverting.')

        # Use these to determine direction list assuming delay start on neg. edge of clock (FALL)
        # Also, use shorthand that 'FALL' == False, 'RISE' == True to simplify logic
        bool_dirs = [False]

@ -324,9 +324,9 @@ class delay(simulation):
        for mod_bool in mod_type_bools:
            cur_dir = (cur_dir == mod_bool)
            bool_dirs.append(cur_dir)

        # Convert from boolean to string
        return ['RISE' if dbool else 'FALL' for dbool in bool_dirs]
        return ['RISE' if dbool else 'FALL' for dbool in bool_dirs]

    def set_load_slew(self, load, slew):
        """ Set the load and slew """

@ -342,7 +342,7 @@ class delay(simulation):
        except ValueError:
            debug.error("Probe Address is not of binary form: {0}".format(self.probe_address), 1)

        if len(self.probe_address) != self.addr_size:
        if len(self.probe_address) != self.bank_addr_size:
            debug.error("Probe Address's number of bits does not correspond to given SRAM", 1)

        if not isinstance(self.probe_data, int) or self.probe_data>self.word_size or self.probe_data<0:

@ -455,7 +455,7 @@ class delay(simulation):
            self.stim.gen_constant(sig_name="{0}{1}_{2} ".format(self.din_name, write_port, i),
                                   v_val=0)
        for port in self.all_ports:
            for i in range(self.addr_size):
            for i in range(self.bank_addr_size):
                self.stim.gen_constant(sig_name="{0}{1}_{2}".format(self.addr_name, port, i),
                                       v_val=0)

@ -827,7 +827,7 @@ class delay(simulation):
            debug.error("Failed to Measure Read Port Values:\n\t\t{0}".format(read_port_dict), 1)

        result[port].update(read_port_dict)

        self.path_delays = self.check_path_measures()

        return (True, result)

@ -932,7 +932,7 @@ class delay(simulation):

    def check_path_measures(self):
        """Get and check all the delays along the sen and bitline paths"""

        # Get and set measurement, no error checking done other than prints.
        debug.info(2, "Checking measures in Delay Path")
        value_dict = {}

@ -1179,7 +1179,7 @@ class delay(simulation):
        #char_sram_data["sen_path_names"] = sen_names
        # FIXME: low-to-high delays are altered to be independent of the period. This makes the lib results less accurate.
        self.alter_lh_char_data(char_port_data)

        return (char_sram_data, char_port_data)

    def alter_lh_char_data(self, char_port_data):

@ -1222,14 +1222,14 @@ class delay(simulation):
        for meas in self.sen_path_meas:
            sen_name_list.append(meas.name)
            sen_delay_list.append(value_dict[meas.name])

        bl_name_list = []
        bl_delay_list = []
        for meas in self.bl_path_meas:
            bl_name_list.append(meas.name)
            bl_delay_list.append(value_dict[meas.name])

        return sen_name_list, sen_delay_list, bl_name_list, bl_delay_list
        return sen_name_list, sen_delay_list, bl_name_list, bl_delay_list

    def calculate_inverse_address(self):
        """Determine dummy test address based on probe address and column mux size."""

@ -1391,7 +1391,7 @@ class delay(simulation):
        """

        for port in self.all_ports:
            for i in range(self.addr_size):
            for i in range(self.bank_addr_size):
                sig_name = "{0}{1}_{2}".format(self.addr_name, port, i)
                self.stim.gen_pwl(sig_name, self.cycle_times, self.addr_values[port][i], self.period, self.slew, 0.05)
@ -1,27 +1,27 @@
# See LICENSE for licensing information.
#
# Copyright (c) 2016-2019 Regents of the University of California and The Board
# Copyright (c) 2016-2023 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#

from openram import debug
from openram import OPTS
from .simulation import simulation
from globals import OPTS
import debug

class elmore(simulation):

class elmore(simulation):
    """
    Delay model for the SRAM which calculates Elmore delays along the SRAM critical path.
    """

    def __init__(self, sram, spfile, corner):
        super().__init__(sram, spfile, corner)

        # self.targ_read_ports = []
        # self.targ_write_ports = []
        # self.period = 0
        # if self.write_size:
        #     if self.write_size != self.word_size:
        #         self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
        #     else:
        #         self.num_wmasks = 0

@ -30,13 +30,13 @@ class elmore(simulation):
        self.set_corner(corner)
        self.create_signal_names()
        self.add_graph_exclusions()

    def set_params(self):

    def set_params(self):
        """Set parameters specific to the corner being simulated"""
        self.params = {}
        # Set the specific functions to use for timing defined in the SRAM module
        self.params["model_name"] = OPTS.model_name

    def get_lib_values(self, load_slews):
        """
        Return the analytical model results for the SRAM.

@ -45,7 +45,7 @@ class elmore(simulation):
            debug.warning("In analytical mode, all ports have the timing of the first read port.")

        # Probe set to 0th bit, does not matter for analytical delay.
        self.set_probe('0' * self.addr_size, 0)
        self.set_probe('0' * self.bank_addr_size, 0)
        self.create_graph()
        self.set_internal_spice_names()
        self.create_measurement_names()

@ -66,7 +66,7 @@ class elmore(simulation):
        for load,slew in load_slews:
            # Calculate delay based on slew and load
            path_delays = self.graph.get_timing(bl_path, self.corner, slew, load, self.params)

            total_delay = self.sum_delays(path_delays)
            max_delay = max(max_delay, total_delay.delay)
            debug.info(1,

@ -84,7 +84,7 @@ class elmore(simulation):
            elif "slew" in mname and port in self.read_ports:
                port_data[port][mname].append(total_delay.slew / 1e3)

        # Margin for error in period. Calculated by averaging required margin for a small and large
        # Margin for error in period. Calculated by averaging required margin for a small and large
        # memory. FIXME: margin is quite large, should be looked into.
        period_margin = 1.85
        sram_data = {"min_period": (max_delay / 1e3) * 2 * period_margin,

@ -106,4 +106,4 @@ class elmore(simulation):
        power.leakage /= 1e6
        debug.info(1, "Dynamic Power: {0} mW".format(power.dynamic))
        debug.info(1, "Leakage Power: {0} mW".format(power.leakage))
        return power
        return power
@ -1,18 +1,18 @@
|
|||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2021 Regents of the University of California and The Board
|
||||
# Copyright (c) 2016-2023 Regents of the University of California and The Board
|
||||
# of Regents for the Oklahoma Agricultural and Mechanical College
|
||||
# (acting for and on behalf of Oklahoma State University)
|
||||
# All rights reserved.
|
||||
#
|
||||
import collections
|
||||
import debug
|
||||
import random
|
||||
import math
|
||||
import random
|
||||
import collections
|
||||
from numpy import binary_repr
|
||||
from openram import debug
|
||||
from openram import OPTS
|
||||
from .stimuli import *
|
||||
from .charutils import *
|
||||
from globals import OPTS
|
||||
from .simulation import simulation
|
||||
|
||||
|
||||
|
|
@ -44,7 +44,7 @@ class functional(simulation):
|
|||
else:
|
||||
self.output_path = output_path
|
||||
|
||||
if self.write_size:
|
||||
if self.write_size != self.word_size:
|
||||
self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
|
||||
else:
|
||||
self.num_wmasks = 0
|
||||
|
|
@ -60,7 +60,7 @@ class functional(simulation):
|
|||
self.addr_spare_index = -int(math.log(self.words_per_row) / math.log(2))
|
||||
else:
|
||||
# This will select the entire address when one word per row
|
||||
self.addr_spare_index = self.addr_size
|
||||
self.addr_spare_index = self.bank_addr_size
|
||||
# If trim is set, specify the valid addresses
|
||||
self.valid_addresses = set()
|
||||
self.max_address = self.num_rows * self.words_per_row - 1
|
||||
|
|
@ -68,7 +68,7 @@ class functional(simulation):
|
|||
for i in range(self.words_per_row):
|
||||
self.valid_addresses.add(i)
|
||||
self.valid_addresses.add(self.max_address - i - 1)
|
||||
self.probe_address, self.probe_data = '0' * self.addr_size, 0
|
||||
self.probe_address, self.probe_data = '0' * self.bank_addr_size, 0
|
||||
self.set_corner(corner)
|
||||
self.set_spice_constants()
|
||||
self.set_stimulus_variables()
|
||||
|
|
@ -133,7 +133,7 @@ class functional(simulation):
|
|||
|
||||
def create_random_memory_sequence(self):
|
||||
# Select randomly, but have 3x more reads to increase probability
|
||||
if self.write_size:
|
||||
if self.write_size != self.word_size:
|
||||
rw_ops = ["noop", "write", "partial_write", "read", "read"]
|
||||
w_ops = ["noop", "write", "partial_write"]
|
||||
else:
|
||||
|
|
@ -142,7 +142,7 @@ class functional(simulation):
|
|||
r_ops = ["noop", "read"]
|
||||
|
||||
# First cycle idle is always an idle cycle
|
||||
comment = self.gen_cycle_comment("noop", "0" * self.word_size, "0" * self.addr_size, "0" * self.num_wmasks, 0, self.t_current)
|
||||
comment = self.gen_cycle_comment("noop", "0" * self.word_size, "0" * self.bank_addr_size, "0" * self.num_wmasks, 0, self.t_current)
|
||||
self.add_noop_all_ports(comment)
|
||||
|
||||
|
||||
|
|
@@ -244,7 +244,7 @@ class functional(simulation):
             self.t_current += self.period

         # Last cycle idle needed to correctly measure the value on the second to last clock edge
-        comment = self.gen_cycle_comment("noop", "0" * self.word_size, "0" * self.addr_size, "0" * self.num_wmasks, 0, self.t_current)
+        comment = self.gen_cycle_comment("noop", "0" * self.word_size, "0" * self.bank_addr_size, "0" * self.num_wmasks, 0, self.t_current)
         self.add_noop_all_ports(comment)

     def gen_masked_data(self, old_word, word, wmask):
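The hunk above ends at the `gen_masked_data` signature. A minimal sketch of the per-group merge such a helper performs (the MSB-first mask ordering and the extra `write_size` parameter here are assumptions for illustration, not taken from the source):

```python
def gen_masked_data(old_word, word, wmask, write_size):
    # For each write-mask bit, take the new data group when the mask
    # bit is '1'; otherwise keep the old word's bits for that group.
    result = ""
    for i, m in enumerate(wmask):
        lo, hi = i * write_size, (i + 1) * write_size
        result += word[lo:hi] if m == "1" else old_word[lo:hi]
    return result

# Partial write of the upper group only:
assert gen_masked_data("00000000", "11111111", "10", 4) == "11110000"
assert gen_masked_data("00000000", "11111111", "01", 4) == "00001111"
```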
@@ -363,10 +363,10 @@ class functional(simulation):
     def gen_addr(self):
         """ Generates a random address value to write to. """
         if self.valid_addresses:
-            random_value = random.sample(self.valid_addresses, 1)[0]
+            random_value = random.sample(list(self.valid_addresses), 1)[0]
         else:
             random_value = random.randint(0, self.max_address)
-        addr_bits = binary_repr(random_value, self.addr_size)
+        addr_bits = binary_repr(random_value, self.bank_addr_size)
         return addr_bits

     def get_data(self):
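The `list()` conversion in the `gen_addr` hunk is a compatibility fix: `random.sample()` on a set was deprecated in Python 3.9 and raises `TypeError` from 3.11 on. A small demonstration:

```python
import random

valid_addresses = {0, 1, 62, 63}

# Sets must be converted to a sequence before sampling on Python 3.11+.
random_value = random.sample(list(valid_addresses), 1)[0]
assert random_value in valid_addresses

# random.choice() on a list is an equivalent single-element pick.
assert random.choice(list(valid_addresses)) in valid_addresses
```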
@@ -426,7 +426,7 @@ class functional(simulation):

         # Generate address bits
         for port in self.all_ports:
-            for bit in range(self.addr_size):
+            for bit in range(self.bank_addr_size):
                 sig_name="{0}{1}_{2} ".format(self.addr_name, port, bit)
                 self.stim.gen_pwl(sig_name, self.cycle_times, self.addr_values[port][bit], self.period, self.slew, 0.05)

@@ -440,7 +440,7 @@ class functional(simulation):

         # Generate wmask bits
         for port in self.write_ports:
-            if self.write_size:
+            if self.write_size != self.word_size:
                 self.sf.write("\n* Generation of wmask signals\n")
                 for bit in range(self.num_wmasks):
                     sig_name = "WMASK{0}_{1} ".format(port, bit)
@@ -1,21 +1,21 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import os,sys,re
+import os, sys, re
 import time
-import debug
 import datetime
+import numpy as np
+from openram import debug
+from openram import tech
+from openram.tech import spice
+from openram import OPTS
 from .setup_hold import *
 from .delay import *
 from .charutils import *
-import tech
-import numpy as np
-from globals import OPTS
-from tech import spice


 class lib:

@@ -183,7 +183,8 @@ class lib:
         # set the read and write port as inputs.
         self.write_data_bus(port)
         self.write_addr_bus(port)
-        if self.sram.write_size and port in self.write_ports:
+        if self.sram.write_size != self.sram.word_size and \
+           port in self.write_ports:
             self.write_wmask_bus(port)
         # need to split this into sram and port control signals
         self.write_control_pins(port)

@@ -193,8 +194,8 @@ class lib:

     def write_footer(self):
         """ Write the footer """
-        self.lib.write(" }\n") #Closing brace for the cell
-        self.lib.write("}\n") #Closing brace for the library
+        self.lib.write(" }\n") # Closing brace for the cell
+        self.lib.write("}\n") # Closing brace for the library

     def write_header(self):
         """ Write the header information """

@@ -378,7 +379,7 @@ class lib:
         self.lib.write(" bit_to : 0;\n")
         self.lib.write(" }\n\n")

-        if self.sram.write_size:
+        if self.sram.write_size != self.sram.word_size:
             self.lib.write(" type (wmask){\n")
             self.lib.write(" base_type : array;\n")
             self.lib.write(" data_type : bit;\n")
@@ -1,17 +1,15 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2019 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from .regression_model import regression_model
 from sklearn.linear_model import Ridge
-from globals import OPTS
-import debug
-
 from sklearn.linear_model import LinearRegression
+from openram import debug
+from openram import OPTS
+from .regression_model import regression_model


 class linear_regression(regression_model):

@@ -26,18 +24,17 @@ class linear_regression(regression_model):
         """
         Supervised training of model.
         """

-
         #model = LinearRegression()
         model = self.get_model()
         model.fit(features, labels)
         return model

     def model_prediction(self, model, features):
         """
         Have the model perform a prediction and unscale the prediction
         as the model is trained with scaled values.
         """

         pred = model.predict(features)
         return pred
@@ -1,16 +1,17 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-from tech import drc, parameter, spice
 from abc import ABC, abstractmethod
+from openram import debug
+from openram.tech import drc, parameter, spice
 from .stimuli import *
 from .charutils import *


 class spice_measurement(ABC):
     """Base class for spice stimulus measurements."""
     def __init__(self, measure_name, measure_scale=None, has_port=True):

@@ -184,7 +185,7 @@ class voltage_when_measure(spice_measurement):
         trig_voltage = self.trig_val_of_vdd * vdd_voltage
         return (meas_name, trig_name, targ_name, trig_voltage, self.trig_dir_str, trig_td)


 class voltage_at_measure(spice_measurement):
     """Generates a spice measurement to measure the voltage at a specific time.
     The time is considered variant with different periods."""

@@ -211,4 +212,3 @@ class voltage_at_measure(spice_measurement):
         meas_name = self.name
         targ_name = self.targ_name_no_port
         return (meas_name, targ_name, time_at)
-
@@ -1,16 +1,16 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-import tech
+from openram import debug
+from openram import tech
+from openram import OPTS
 from .stimuli import *
 from .trim_spice import *
 from .charutils import *
-from globals import OPTS
 from .delay import delay
 from .measurements import *

@@ -82,7 +82,7 @@ class model_check(delay):
         replicated here.
         """
         delay.create_signal_names(self)

         # Signal names are all hardcoded, need to update to make it work for probe address and different configurations.
         wl_en_driver_signals = ["Xsram{1}Xcontrol{{}}.Xbuf_wl_en.Zb{0}_int".format(stage, OPTS.hier_seperator) for stage in range(1, self.get_num_wl_en_driver_stages())]
         wl_driver_signals = ["Xsram{2}Xbank0{2}Xwordline_driver{{}}{2}Xwl_driver_inv{0}{2}Zb{1}_int".format(self.wordline_row, stage, OPTS.hier_seperator) for stage in range(1, self.get_num_wl_driver_stages())]

@@ -448,6 +448,3 @@ class model_check(delay):
         name_dict[self.sae_model_name] = name_dict["sae_measures"]

         return name_dict
-
-
-
@@ -1,15 +1,14 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2019 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from .regression_model import regression_model
-from globals import OPTS
-import debug
 from sklearn.neural_network import MLPRegressor
+from openram import debug
+from openram import OPTS
+from .regression_model import regression_model


 class neural_network(regression_model):

@@ -25,20 +24,19 @@ class neural_network(regression_model):
         """
         Training multilayer model
         """

         flat_labels = np.ravel(labels)
         model = self.get_model()
         model.fit(features, flat_labels)

-
         return model

     def model_prediction(self, model, features):
         """
         Have the model perform a prediction and unscale the prediction
         as the model is trained with scaled values.
         """

         pred = model.predict(features)
         reshape_pred = np.reshape(pred, (len(pred),1))
         return reshape_pred
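The `np.ravel` / `np.reshape` pair in the neural-network hunk exists because scikit-learn's `MLPRegressor` expects 1-D targets for `fit()` while the rest of the characterizer works with column vectors. The shape round-trip looks like this:

```python
import numpy as np

# Column-vector labels are flattened before fit(), and the flat
# predictions are reshaped back into a column afterwards.
labels = np.array([[0.1], [0.2], [0.3]])
flat_labels = np.ravel(labels)
assert flat_labels.shape == (3,)

pred = flat_labels  # stand-in for model.predict(features)
reshape_pred = np.reshape(pred, (len(pred), 1))
assert reshape_pred.shape == (3, 1)
```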
@@ -1,17 +1,16 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2019 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
+import math
+from openram import debug
+from openram import OPTS
 from .analytical_util import *
 from .simulation import simulation
-from globals import OPTS
-import debug
-
-import math

 relative_data_path = "sim_data"
 data_file = "sim_data.csv"

@@ -25,7 +24,7 @@ data_fnames = ["rise_delay.csv",
                "read0_power.csv",
                "leakage_data.csv",
                "sim_time.csv"]
 # Positions must correspond to data_fname list
 lib_dnames = ["delay_lh",
               "delay_hl",
               "slew_lh",

@@ -35,13 +34,13 @@ lib_dnames = ["delay_lh",
               "read1_power",
               "read0_power",
               "leakage_power",
               "sim_time"]
+# Check if another data dir was specified
 if OPTS.sim_data_path == None:
     data_dir = OPTS.openram_tech+relative_data_path
 else:
     data_dir = OPTS.sim_data_path

 data_path = data_dir + '/' + data_file

 class regression_model(simulation):

@@ -52,23 +51,23 @@ class regression_model(simulation):

     def get_lib_values(self, load_slews):
         """
         A model and prediction is created for each output needed for the LIB
         """

         debug.info(1, "Characterizing SRAM using regression models.")
         log_num_words = math.log(OPTS.num_words, 2)
-        model_inputs = [log_num_words,
-                        OPTS.word_size,
-                        process_transform[self.process],
-                        self.vdd_voltage,
-                        self.temperature]
+        model_inputs = [log_num_words,
+                        OPTS.word_size,
+                        OPTS.words_per_row,
+                        OPTS.local_array_size,
+                        process_transform[self.process],
+                        self.vdd_voltage,
+                        self.temperature]
         # Area removed for now
         # self.sram.width * self.sram.height,
         # Include above inputs, plus load and slew which are added below
         self.num_inputs = len(model_inputs)+2

         self.create_measurement_names()
         models = self.train_models()

@@ -85,22 +84,22 @@ class regression_model(simulation):
             port_data[port]['delay_hl'].append(sram_vals['fall_delay'])
             port_data[port]['slew_lh'].append(sram_vals['rise_slew'])
             port_data[port]['slew_hl'].append(sram_vals['fall_slew'])

             port_data[port]['write1_power'].append(sram_vals['write1_power'])
             port_data[port]['write0_power'].append(sram_vals['write0_power'])
             port_data[port]['read1_power'].append(sram_vals['read1_power'])
             port_data[port]['read0_power'].append(sram_vals['read0_power'])

             # Disabled power not modeled. Copied from other power predictions
             port_data[port]['disabled_write1_power'].append(sram_vals['write1_power'])
             port_data[port]['disabled_write0_power'].append(sram_vals['write0_power'])
             port_data[port]['disabled_read1_power'].append(sram_vals['read1_power'])
             port_data[port]['disabled_read0_power'].append(sram_vals['read0_power'])

             debug.info(1, '{}, {}, {}, {}, {}'.format(slew,
                                                       load,
                                                       port,
                                                       sram_vals['rise_delay'],
                                                       sram_vals['rise_slew']))
         # Estimate the period as double the delay with margin
         period_margin = 0.1

@@ -112,19 +111,19 @@ class regression_model(simulation):

         return (sram_data, port_data)

     def get_predictions(self, model_inputs, models):
         """
         Generate a model and prediction for LIB output
         """

         #Scaled the inputs using first data file as a reference
         scaled_inputs = np.asarray([scale_input_datapoint(model_inputs, data_path)])

         predictions = {}
         out_pos = 0
         for dname in self.output_names:
             m = models[dname]

             scaled_pred = self.model_prediction(m, scaled_inputs)
             pred = unscale_data(scaled_pred.tolist(), data_path, pos=self.num_inputs+out_pos)
             debug.info(2,"Unscaled Prediction = {}".format(pred))

@@ -149,7 +148,7 @@ class regression_model(simulation):
             output_num+=1

         return models

     def score_model(self):
         num_inputs = 9 #FIXME - should be defined somewhere else
         self.output_names = get_data_names(data_path)[num_inputs:]

@@ -165,15 +164,15 @@ class regression_model(simulation):
             scr = model.score(features, output_label)
             debug.info(1, "{}, {}".format(o_name, scr))
             output_num+=1

     def cross_validation(self, test_only=None):
         """Wrapper for sklean cross validation function for OpenRAM regression models.
         Returns the mean accuracy for each model/output."""

         from sklearn.model_selection import cross_val_score
         untrained_model = self.get_model()

         num_inputs = 9 #FIXME - should be defined somewhere else
         self.output_names = get_data_names(data_path)[num_inputs:]
         data = get_scaled_data(data_path)
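The `cross_validation` wrapper above delegates to scikit-learn's `cross_val_score`, which fits a clone of the untrained model on each fold and returns one score per fold. A self-contained sketch with synthetic data (the exact-linear data set here is an assumption for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Ten synthetic samples with an exact linear relationship, so every
# fold should score an R^2 of (essentially) 1.0.
X = np.arange(20, dtype=float).reshape(10, 2)
y = X @ np.array([1.0, 2.0]) + 3.0
scores = cross_val_score(LinearRegression(), X, y, cv=5)
assert scores.mean() > 0.99
```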
@@ -193,9 +192,9 @@ class regression_model(simulation):
             debug.info(1, "{}, {}, {}".format(o_name, scores.mean(), scores.std()))
             model_scores[o_name] = scores.mean()
             output_num+=1

         return model_scores

     # Fixme - only will work for sklearn regression models
     def save_model(self, model_name, model):
         try:

@@ -205,4 +204,3 @@ class regression_model(simulation):
         OPTS.model_dict[model_name+"_coef"] = list(model.coef_[0])
         debug.info(1,"Coefs of {}:{}".format(model_name,OPTS.model_dict[model_name+"_coef"]))
         OPTS.model_dict[model_name+"_intercept"] = float(model.intercept_)
-
@@ -1,16 +1,16 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import tech
+from openram import debug
+from openram.sram_factory import factory
+from openram import tech
+from openram import OPTS
 from .stimuli import *
-import debug
 from .charutils import *
-from globals import OPTS
-from sram_factory import factory


 class setup_hold():

@@ -22,7 +22,7 @@ class setup_hold():
     def __init__(self, corner):
         # This must match the spice model order
         self.dff = factory.create(module_type=OPTS.dff)

         self.period = tech.spice["feasible_period"]

         debug.info(2, "Feasible period from technology file: {0} ".format(self.period))

@@ -106,8 +106,8 @@ class setup_hold():
                            setup=0)

     def write_clock(self):
         """
         Create the clock signal for setup/hold analysis.
+        First period initializes the FF
+        while the second is used for characterization.
         """

@@ -206,7 +206,7 @@ class setup_hold():

         self.stim.run_sim(self.stim_sp)
         clk_to_q = convert_to_float(parse_spice_list("timing", "clk2q_delay"))
         # We use a 1/2 speed clock for some reason...
         setuphold_time = (target_time - 2 * self.period)
         if mode == "SETUP": # SETUP is clk-din, not din-clk
             passing_setuphold_time = -1 * setuphold_time
@@ -1,16 +1,16 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
 import math
-import tech
-from globals import OPTS
-from sram_factory import factory
-from base import timing_graph
+from openram import debug
+from openram.base import timing_graph
+from openram.sram_factory import factory
+from openram import tech
+from openram import OPTS


 class simulation():

@@ -20,7 +20,7 @@ class simulation():

         self.name = self.sram.name
         self.word_size = self.sram.word_size
-        self.addr_size = self.sram.addr_size
+        self.bank_addr_size = self.sram.bank_addr_size
         self.write_size = self.sram.write_size
         self.num_spare_rows = self.sram.num_spare_rows
         if not self.sram.num_spare_cols:

@@ -39,7 +39,7 @@ class simulation():
         self.words_per_row = self.sram.words_per_row
         self.num_rows = self.sram.num_rows
         self.num_cols = self.sram.num_cols
-        if self.write_size:
+        if self.write_size != self.word_size:
             self.num_wmasks = int(math.ceil(self.word_size / self.write_size))
         else:
             self.num_wmasks = 0
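The `write_size != self.word_size` guard recurs throughout this commit: a write mask only makes sense for partial writes, so when `write_size` equals `word_size` no mask bits are generated. The mask count itself follows the `ceil` expression above:

```python
import math

def num_write_masks(word_size, write_size):
    # Mirrors the guard in the diff: masks exist only for true partial
    # writes, i.e. when write_size is set and differs from word_size.
    if write_size and write_size != word_size:
        return int(math.ceil(word_size / write_size))
    return 0

assert num_write_masks(32, 8) == 4    # four byte-enable style mask bits
assert num_write_masks(32, 32) == 0   # full-word writes need no mask
assert num_write_masks(20, 8) == 3    # ceil handles uneven groups
```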
@@ -80,7 +80,7 @@ class simulation():
         self.dout_name = "dout"
         self.pins = self.gen_pin_names(port_signal_names=(self.addr_name, self.din_name, self.dout_name),
                                        port_info=(len(self.all_ports), self.write_ports, self.read_ports),
-                                       abits=self.addr_size,
+                                       abits=self.bank_addr_size,
                                        dbits=self.word_size + self.num_spare_cols)
         debug.check(len(self.sram.pins) == len(self.pins),
                     "Number of pins generated for characterization \

@@ -103,7 +103,7 @@ class simulation():
         self.spare_wen_value = {port: [] for port in self.write_ports}

         # Three dimensional list to handle each addr and data bits for each port over the number of checks
-        self.addr_values = {port: [[] for bit in range(self.addr_size)] for port in self.all_ports}
+        self.addr_values = {port: [[] for bit in range(self.bank_addr_size)] for port in self.all_ports}
         self.data_values = {port: [[] for bit in range(self.word_size + self.num_spare_cols)] for port in self.write_ports}
         self.wmask_values = {port: [[] for bit in range(self.num_wmasks)] for port in self.write_ports}
         self.spare_wen_values = {port: [[] for bit in range(self.num_spare_cols)] for port in self.write_ports}

@@ -174,10 +174,10 @@ class simulation():

     def add_address(self, address, port):
         """ Add the array of address values """
-        debug.check(len(address)==self.addr_size, "Invalid address size.")
+        debug.check(len(address)==self.bank_addr_size, "Invalid address size.")

         self.addr_value[port].append(address)
-        bit = self.addr_size - 1
+        bit = self.bank_addr_size - 1
         for c in address:
             if c=="0":
                 self.addr_values[port][bit].append(0)

@@ -330,7 +330,7 @@ class simulation():
             try:
                 self.add_address(self.addr_value[port][-1], port)
             except:
-                self.add_address("0" * self.addr_size, port)
+                self.add_address("0" * self.bank_addr_size, port)

             # If the port is also a readwrite then add
             # the same value as previous cycle

@@ -464,7 +464,7 @@ class simulation():
         for port in range(total_ports):
             pin_names.append("{0}{1}".format("clk", port))

-        if self.write_size:
+        if self.write_size != self.word_size:
             for port in write_index:
                 for bit in range(self.num_wmasks):
                     pin_names.append("WMASK{0}_{1}".format(port, bit))
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.

@@ -11,12 +11,12 @@ various functions that can be be used to generate stimulus for other
 simulations as well.
 """

-import tech
-import debug
-import subprocess
 import os
+import subprocess
 import numpy as np
-from globals import OPTS
+from openram import debug
+from openram import tech
+from openram import OPTS


 class stimuli():

@@ -405,6 +405,11 @@ class stimuli():
         spice_stdout = open("{0}spice_stdout.log".format(OPTS.openram_temp), 'w')
         spice_stderr = open("{0}spice_stderr.log".format(OPTS.openram_temp), 'w')

+        # Wrap the command with conda activate & conda deactivate
+        # FIXME: Should use verify/run_script.py here but run_script doesn't return
+        # the return code of the subprocess. File names might also mismatch.
+        from openram import CONDA_HOME
+        cmd = "source {0}/bin/activate && {1} && conda deactivate".format(CONDA_HOME, cmd)
         debug.info(2, cmd)
         retcode = subprocess.call(cmd, stdout=spice_stdout, stderr=spice_stderr, shell=True)
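The `run_sim` change above builds a single shell command so the spice simulator runs inside the conda environment that OpenRAM manages. A sketch of the string construction (the paths and command here are hypothetical; `CONDA_HOME` and `cmd` come from the surrounding OpenRAM code):

```python
# Hypothetical values for illustration only.
conda_home = "/opt/conda"
cmd = "ngspice -b stim.sp"

# One compound command: activate, run, deactivate.
wrapped = "source {0}/bin/activate && {1} && conda deactivate".format(conda_home, cmd)
assert wrapped == "source /opt/conda/bin/activate && ngspice -b stim.sp && conda deactivate"
# The '&&' chaining and the 'source' shell builtin are why the
# subprocess.call in the diff must use shell=True.
```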
@@ -1,13 +1,13 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
-from math import log,ceil
 import re
+from math import log, ceil
+from openram import debug


 class trim_spice():

@@ -46,9 +46,9 @@ class trim_spice():
         self.col_addr_size = int(log(self.words_per_row, 2))
         self.bank_addr_size = self.col_addr_size + self.row_addr_size
         self.addr_size = self.bank_addr_size + int(log(self.num_banks, 2))

     def trim(self, address, data_bit):
         """
         Reduce the spice netlist but KEEP the given bits at the
         address (and things that will add capacitive load!)
         """
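The address arithmetic above is the key distinction this whole commit leans on: `bank_addr_size` addresses a word within one bank, while `addr_size` adds the bank-select bits on top. A worked example with assumed sizes:

```python
from math import log

# Assumed configuration for illustration.
num_rows, words_per_row, num_banks = 64, 4, 2

row_addr_size = int(log(num_rows, 2))                  # 6 bits select the row
col_addr_size = int(log(words_per_row, 2))             # 2 bits select the word in a row
bank_addr_size = col_addr_size + row_addr_size         # 8 bits within one bank
addr_size = bank_addr_size + int(log(num_banks, 2))    # plus 1 bank-select bit

assert (row_addr_size, col_addr_size) == (6, 2)
assert (bank_addr_size, addr_size) == (8, 9)
```

This is why the characterizer now sizes its address buses and probe strings with `bank_addr_size` rather than the full `addr_size`.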
@@ -62,7 +62,7 @@ class trim_spice():
             col_address = int(address[0:self.col_addr_size], 2)
         else:
             col_address = 0

         # 1. Keep cells in the bitcell array based on WL and BL
         wl_name = "wl_{}".format(wl_address)
         bl_name = "bl_{}".format(int(self.words_per_row*data_bit + col_address))
@@ -1 +1,6 @@
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
 from .datasheet_gen import datasheet_gen
@@ -1,14 +1,15 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from pathlib import Path
-import glob
-import os
 import sys
+import os
+import glob
+from pathlib import Path


 # This is the path to the directory you would like to search
 # This directory is searched recursively for .html files
@@ -1,14 +1,14 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-from .table_gen import *
 import os
 import base64
-from globals import OPTS
+from openram import OPTS
+from .table_gen import *


 class datasheet():

@@ -31,7 +31,7 @@ class datasheet():
         if OPTS.output_datasheet_info:
             datasheet_path = OPTS.output_path
         else:
             datasheet_path = OPTS.openram_temp
         with open(datasheet_path + "/datasheet.info") as info:
             self.html += '<!--'
             for row in info:
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.

@@ -15,10 +15,10 @@ a web friendly html datasheet.
 # Improve css


-from globals import OPTS
 import os
 import math
 import csv
+from openram import OPTS
 from .datasheet import datasheet
 from .table_gen import table_gen
@@ -1,12 +1,11 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-


 class table_gen:
     """small library of functions to generate the html tables"""
@@ -1,15 +1,15 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import os
-import inspect
-import globals
 import sys
+import os
 import pdb
+import inspect
+from openram import globals

 # the debug levels:
 # 0 = minimum output (default)

@@ -29,7 +29,7 @@ def check(check, str):

     if globals.OPTS.debug:
         pdb.set_trace()

     assert 0

@@ -96,7 +96,11 @@ log.create_file = True


 def info(lev, str):
-    from globals import OPTS
+    from openram.globals import OPTS
+    # 99 is a special never print level
+    if lev == 99:
+        return

     if (OPTS.verbose_level >= lev):
         frm = inspect.stack()[1]
         mod = inspect.getmodule(frm[0])
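The new early-return in `debug.info` adds a reserved level 99 that is never printed regardless of verbosity. A simplified model of that gating (the real function prints with caller context via `inspect`; this sketch returns the message instead so the behavior is observable):

```python
VERBOSE_LEVEL = 1  # stand-in for OPTS.verbose_level

def info(lev, msg):
    # Level 99 is the special "never print" level added in the diff.
    if lev == 99:
        return None
    if VERBOSE_LEVEL >= lev:
        return msg
    return None

assert info(99, "suppressed") is None
assert info(1, "shown") == "shown"
assert info(2, "too verbose") is None
```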
@@ -108,9 +112,9 @@ def info(lev, str):
         print_raw("[{0}/{1}]: {2}".format(class_name,
                                           frm[0].f_code.co_name, str))


 def archive():
-    from globals import OPTS
+    from openram.globals import OPTS
     try:
         OPENRAM_HOME = os.path.abspath(os.environ.get("OPENRAM_HOME"))
     except:

@@ -121,17 +125,16 @@ def archive():
     info(0, "Archiving failed files to {}.zip".format(zip_file))
     shutil.make_archive(zip_file, 'zip', OPTS.openram_temp)


 def bp():
     """
     An empty function so you can set soft breakpoints in pdb.
     Usage:
     1) Add a breakpoint anywhere in your code with "import debug; debug.bp()".
-    2) Run "python3 -m pdb openram.py config.py" or "python3 -m pdb 05_bitcell_array.test" (for example)
+    2) Run "python3 -m pdb sram_compiler.py config.py" or "python3 -m pdb 05_bitcell_array.test" (for example)
     3) When pdb starts, run "break debug.bp" to set a SOFT breakpoint. (Or you can add this to your ~/.pdbrc)
     4) Then run "cont" to continue.
     5) You can now set additional breakpoints or display commands
        and whenever you encounter the debug.bp() they won't be "reset".
     """
     pass
@@ -1,3 +1,8 @@
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
 from .custom_cell_properties import *
 from .custom_layer_properties import *
 from .design_rules import *
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2020 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -11,14 +11,14 @@ class cell:

         # Some cells may have body bias (well taps) exposed as ports
         self._body_bias = body_bias

         # Specifies if this is a hard (i.e. GDS) cell
         self._hard_cell = hard_cell
         self._boundary_layer = boundary_layer

         # Specifies the port directions
         self._port_types_map = {x: y for (x, y) in zip(port_order, port_types)}

         # Specifies a map from OpenRAM names to cell names
         # by default it is 1:1
         if not port_map:
@@ -31,13 +31,13 @@ class cell:
         # Create an index array
         self._port_indices = [self._port_order.index(x) for x in self._original_port_order]

         # Update ordered name list
         self._port_names = [self._port_map[x] for x in self._port_order]

         # Update ordered type list
         self._port_types = [self._port_types_map[x] for x in self._port_order]

     @property
     def hard_cell(self):
         return self._hard_cell
@@ -73,21 +73,21 @@ class cell:
     @property
     def port_indices(self):
         return self._port_indices

     @property
     def port_map(self):
         return self._port_map

     @port_map.setter
     def port_map(self, port_map):
         self._port_map = port_map
         # Update ordered name list to use the new names
         self._port_names = [self._port_map[x] for x in self._port_order]

     @property
     def body_bias(self):
         return self._body_bias

     @body_bias.setter
     def body_bias(self, body_bias):
         # It is assumed it is [nwell, pwell]
@@ -96,7 +96,7 @@ class cell:
         self._port_types['vnb'] = "GROUND"
         self._port_map['vpb'] = body_bias[1]
         self._port_types['vpb'] = "POWER"

     @property
     def port_types(self):
         return self._port_types
@@ -108,7 +108,7 @@ class cell:
         self._port_types_map = {x: y for (x, y) in zip(self._port_order, self._port_types)}
         # Update ordered type list
         self._port_types = [self._port_types_map[x] for x in self._port_order]

     @property
     def boundary_layer(self):
         return self._boundary_layer
@@ -116,8 +116,8 @@ class cell:
     @boundary_layer.setter
     def boundary_layer(self, x):
         self._boundary_layer = x


 class _pins:
     def __init__(self, pin_dict):
         # make the pins elements of the class to allow "." access.
@@ -148,7 +148,7 @@ class bitcell(cell):
         super().__init__(port_order, port_types, port_map)

         self.end_caps = end_caps

         if not mirror:
             self.mirror = _mirror_axis(True, False)
         else:
@@ -166,7 +166,7 @@ class bitcell(cell):
         self.gnd_layer = "m1"
         self.gnd_dir = "H"


 class cell_properties():
     """
     This contains meta information about the custom designed cells. For
@@ -186,24 +186,27 @@ class cell_properties():
         self.names["col_cap_bitcell_2port"] = "col_cap_cell_2rw"
         self.names["row_cap_bitcell_1port"] = "row_cap_cell_1rw"
         self.names["row_cap_bitcell_2port"] = "row_cap_cell_2rw"
+        self.names["internal"] = "internal"

         self.use_strap = False
         self._ptx = _ptx(model_is_subckt=False,
                          bin_spice_models=False)

         self._pgate = _pgate(add_implants=False)

         self._inv_dec = cell(["A", "Z", "vdd", "gnd"],
                              ["INPUT", "OUTPUT", "POWER", "GROUND"])

         self._nand2_dec = cell(["A", "B", "Z", "vdd", "gnd"],
                                ["INPUT", "INPUT", "OUTPUT", "POWER", "GROUND"])

         self._nand3_dec = cell(["A", "B", "C", "Z", "vdd", "gnd"],
                                ["INPUT", "INPUT", "INPUT", "OUTPUT", "POWER", "GROUND"])

         self._nand4_dec = cell(["A", "B", "C", "D", "Z", "vdd", "gnd"],
                                ["INPUT", "INPUT", "INPUT", "INPUT", "OUTPUT", "POWER", "GROUND"])

         self._dff = cell(["D", "Q", "clk", "vdd", "gnd"],
                          ["INPUT", "OUTPUT", "INPUT", "POWER", "GROUND"])
@@ -230,7 +233,13 @@ class cell_properties():
         self._row_cap_2port = bitcell(["wl0", "wl1", "gnd"],
                                       ["INPUT", "INPUT", "POWER", "GROUND"])

+        self._internal = cell([],[])
+
+    @property
+    def internal(self):
+        return self._internal
+
     @property
     def ptx(self):
         return self._ptx
@@ -246,15 +255,15 @@ class cell_properties():
     @property
     def nand2_dec(self):
         return self._nand2_dec

     @property
     def nand3_dec(self):
         return self._nand3_dec

     @property
     def nand4_dec(self):
         return self._nand4_dec

     @property
     def dff(self):
         return self._dff
@@ -270,7 +279,7 @@ class cell_properties():
     @property
     def bitcell_1port(self):
         return self._bitcell_1port

     @property
     def bitcell_2port(self):
         return self._bitcell_2port
@@ -282,7 +291,7 @@ class cell_properties():
     @property
     def row_cap_1port(self):
         return self._row_cap_1port

     @property
     def col_cap_2port(self):
         return self._col_cap_2port
@@ -290,4 +299,3 @@ class cell_properties():
     @property
     def row_cap_2port(self):
         return self._row_cap_2port
-
@@ -1,12 +1,11 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2020 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #


 class _bank:
     def __init__(self, stack, pitch):
         # bank
@@ -1,11 +1,11 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
+from openram import debug
 from .drc_value import *
 from .drc_lut import *
@@ -1,11 +1,11 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #
-import debug
+from openram import debug


 class drc_lut():
@@ -1,12 +1,11 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
 #


 class drc_value():
     """
     A single DRC value.
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -1,8 +1,8 @@
-import pyx
 import math
-from numpy import matrix
-from gdsPrimitives import *
 import random
+from numpy import matrix
+from openram.gdsMill import pyx
+from .gdsPrimitives import *

 class pdfLayout:
     """Class representing a view for a layout as a PDF"""
@@ -1,8 +1,8 @@
-from .gdsPrimitives import *
-import math
 from datetime import *
 import numpy as np
-import debug
+import math
+from openram import debug
+from .gdsPrimitives import *


 class VlsiLayout:
@@ -774,7 +774,7 @@ class VlsiLayout:
             else:
                 label_text = label.textString
             try:
-                from tech import layer_override
+                from openram.tech import layer_override
                 if layer_override[label_text]:
                     shapes = self.getAllShapes((layer_override[label_text][0], None))
                     if not shapes:
@@ -1,7 +1,7 @@
 #!/usr/bin/env python
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -1,6 +1,6 @@
 # See LICENSE for licensing information.
 #
-# Copyright (c) 2016-2021 Regents of the University of California and The Board
+# Copyright (c) 2016-2023 Regents of the University of California and The Board
 # of Regents for the Oklahoma Agricultural and Mechanical College
 # (acting for and on behalf of Oklahoma State University)
 # All rights reserved.
@@ -9,48 +9,45 @@
 This is called globals.py, but it actually parses all the arguments
 and performs the global OpenRAM setup as well.
 """
-import sys
 import os
-import debug
-import re
 import shutil
 import optparse
-import options
+import sys
+import re
+import copy
+import importlib
+import getpass
+import subprocess
+from openram import debug
+from openram import options


-VERSION = "1.1.18"
+from openram import OPENRAM_HOME
+VERSION = open(OPENRAM_HOME + "/../VERSION").read().rstrip()
 NAME = "OpenRAM v{}".format(VERSION)
-USAGE = "openram.py [options] <config file>\nUse -h for help.\n"
+USAGE = "sram_compiler.py [options] <config file>\nUse -h for help.\n"

 OPTS = options.options()
 CHECKPOINT_OPTS = None


 def parse_args():
-    """ Parse the optional arguments for OpenRAM """
+    """ Parse the optional arguments for OpenRAM. """

     global OPTS

     option_list = {
-        optparse.make_option("-b",
-                             "--backannotated",
+        optparse.make_option("-b", "--backannotated",
                              action="store_true",
                              dest="use_pex",
                              help="Back annotate simulation"),
-        optparse.make_option("-o",
-                             "--output",
+        optparse.make_option("-o", "--output",
                              dest="output_name",
                              help="Base output file name(s) prefix",
                              metavar="FILE"),
         optparse.make_option("-p", "--outpath",
                              dest="output_path",
                              help="Output file(s) location"),
-        optparse.make_option("-i",
-                             "--inlinecheck",
+        optparse.make_option("-i", "--inlinecheck",
                              action="store_true",
                              help="Enable inline LVS/DRC checks",
                              dest="inline_lvsdrc"),
@@ -68,36 +65,29 @@ def parse_args():
                              type="int",
                              help="Specify the number of spice simulation threads (default: 3)",
                              dest="num_sim_threads"),
-        optparse.make_option("-v",
-                             "--verbose",
+        optparse.make_option("-v", "--verbose",
                              action="count",
                              dest="verbose_level",
                              help="Increase the verbosity level"),
-        optparse.make_option("-t",
-                             "--tech",
+        optparse.make_option("-t", "--tech",
                              dest="tech_name",
                              help="Technology name"),
-        optparse.make_option("-s",
-                             "--spice",
+        optparse.make_option("-s", "--spice",
                              dest="spice_name",
                              help="Spice simulator executable name"),
-        optparse.make_option("-r",
-                             "--remove_netlist_trimming",
+        optparse.make_option("-r", "--remove_netlist_trimming",
                              action="store_false",
                              dest="trim_netlist",
                              help="Disable removal of noncritical memory cells during characterization"),
-        optparse.make_option("-c",
-                             "--characterize",
+        optparse.make_option("-c", "--characterize",
                              action="store_false",
                              dest="analytical_delay",
                              help="Perform characterization to calculate delays (default is analytical models)"),
-        optparse.make_option("-k",
-                             "--keeptemp",
+        optparse.make_option("-k", "--keeptemp",
                              action="store_true",
                              dest="keep_temp",
                              help="Keep the contents of the temp directory after a successful run"),
-        optparse.make_option("-d",
-                             "--debug",
+        optparse.make_option("-d", "--debug",
                              action="store_true",
                              dest="debug",
                              help="Run in debug mode to drop to pdb on failure")
@@ -125,7 +115,7 @@ def parse_args():


 def print_banner():
-    """ Conditionally print the banner to stdout """
+    """ Conditionally print the banner to stdout. """
     global OPTS
     if OPTS.is_unit_test:
         return
@@ -141,9 +131,6 @@ def print_banner():
     debug.print_raw("|=========" + user_info.center(60) + "=========|")
     dev_info = "Development help: openram-dev-group@ucsc.edu"
     debug.print_raw("|=========" + dev_info.center(60) + "=========|")
-    if OPTS.openram_temp:
-        temp_info = "Temp dir: {}".format(OPTS.openram_temp)
-        debug.print_raw("|=========" + temp_info.center(60) + "=========|")
     debug.print_raw("|=========" + "See LICENSE for license info".center(60) + "=========|")
     debug.print_raw("|==============================================================================|")
@@ -163,8 +150,7 @@ def check_versions():
     try:
         subprocess.check_output(["git", "--version"])
     except:
-        debug.error("Git is required. Please install git.")
-        sys.exit(1)
+        debug.error("Git is required. Please install git.", -1)

     # FIXME: Check versions of other tools here??
     # or, this could be done in each module (e.g. verify, characterizer, etc.)
@@ -180,7 +166,7 @@ def check_versions():
     else:
         OPTS.coverage_exe = ""
         debug.warning("Failed to find coverage installation. This can be installed with pip3 install coverage")

     try:
         import coverage
         OPTS.coverage = 1
@@ -188,7 +174,7 @@ def check_versions():
         OPTS.coverage = 0


-def init_openram(config_file, is_unit_test=True):
+def init_openram(config_file, is_unit_test=False):
     """ Initialize the technology, paths, simulators, etc. """

     check_versions()
@@ -199,38 +185,38 @@ def init_openram(config_file, is_unit_test=True):
     read_config(config_file, is_unit_test)

     install_conda()

     import_tech()

     set_default_corner()

     init_paths()

-    from sram_factory import factory
+    from openram.sram_factory import factory
     factory.reset()

     global OPTS
     global CHECKPOINT_OPTS

     # This is a hack. If we are running a unit test and have checkpointed
     # the options, load them rather than reading the config file.
     # This way, the configuration is reloaded at the start of every unit test.
     # If a unit test fails,
     # we don't have to worry about restoring the old config values
     # that may have been tested.
     if is_unit_test and CHECKPOINT_OPTS:
         OPTS.__dict__ = CHECKPOINT_OPTS.__dict__.copy()
         return

     # Setup correct bitcell names
     setup_bitcell()

     # Import these to find the executables for checkpointing
-    import characterizer
-    import verify
     # Make a checkpoint of the options so we can restore
     # after each unit test
     if not CHECKPOINT_OPTS:
         CHECKPOINT_OPTS = copy.copy(OPTS)
+    from openram import characterizer
+    from openram import verify


 def install_conda():
     """ Setup conda for default tools. """

     # Don't setup conda if not used
     if not OPTS.use_conda or OPTS.is_unit_test:
         return

     debug.info(1, "Creating conda setup...");

     from openram import CONDA_INSTALLER
     subprocess.call(CONDA_INSTALLER)


 def setup_bitcell():
@@ -249,10 +235,10 @@ def setup_bitcell():
         OPTS.bitcell = "bitcell_{}port".format(OPTS.num_ports)
         OPTS.dummy_bitcell = "dummy_" + OPTS.bitcell
         OPTS.replica_bitcell = "replica_" + OPTS.bitcell

     # See if bitcell exists
     try:
-        c = importlib.import_module("modules." + OPTS.bitcell)
+        c = importlib.import_module("openram.modules." + OPTS.bitcell)
         mod = getattr(c, OPTS.bitcell)
     except ImportError:
         # Use the pbitcell if we couldn't find a custom bitcell
@@ -283,21 +269,20 @@ def get_tool(tool_type, preferences, default_name=None):
                     2)
     else:
         debug.info(1, "Using {0}: {1}".format(tool_type, exe_name))
-        return(default_name, exe_name)
+        return (default_name, exe_name)
     else:
         for name in preferences:
             exe_name = find_exe(name)
             if exe_name != None:
                 debug.info(1, "Using {0}: {1}".format(tool_type, exe_name))
-                return(name, exe_name)
+                return (name, exe_name)
             else:
-                debug.info(1,
-                           "Could not find {0}, trying next {1} tool.".format(name, tool_type))
+                debug.info(1, "Could not find {0}, trying next {1} tool.".format(name, tool_type))
         else:
-            return(None, "")
+            return (None, "")


-def read_config(config_file, is_unit_test=True):
+def read_config(config_file, is_unit_test=False):
     """
     Read the configuration file that defines a few parameters. The
     config file is just a Python file that defines some config
@@ -378,21 +363,25 @@ def read_config(config_file, is_unit_test=True):
                                      ports,
                                      OPTS.tech_name)

+    # If write size is not defined, set it equal to word size
+    if OPTS.write_size is None:
+        OPTS.write_size = OPTS.word_size
+

 def end_openram():
-    """ Clean up openram for a proper exit """
+    """ Clean up openram for a proper exit. """
     cleanup_paths()

     if OPTS.check_lvsdrc:
-        import verify
+        from openram import verify
         verify.print_drc_stats()
         verify.print_lvs_stats()
         verify.print_pex_stats()


 def purge_temp():
     """ Remove the temp directory. """
-    debug.info(1,
-               "Purging temp directory: {}".format(OPTS.openram_temp))
+    debug.info(1, "Purging temp directory: {}".format(OPTS.openram_temp))
     #import inspect
     #s = inspect.stack()
     #print("Purge {0} in dir {1}".format(s[3].filename, OPTS.openram_temp))
@@ -406,7 +395,7 @@ def purge_temp():
                 os.remove(i)
             else:
                 shutil.rmtree(i)


 def cleanup_paths():
     """
@@ -414,57 +403,57 @@ def cleanup_paths():
     """
     global OPTS
     if OPTS.keep_temp:
-        debug.info(0,
-                   "Preserving temp directory: {}".format(OPTS.openram_temp))
+        debug.info(0, "Preserving temp directory: {}".format(OPTS.openram_temp))
         return
     elif os.path.exists(OPTS.openram_temp):
         purge_temp()


 def setup_paths():
     """ Set up the non-tech related paths. """
     debug.info(2, "Setting up paths...")

     global OPTS

-    try:
-        OPENRAM_HOME = os.path.abspath(os.environ.get("OPENRAM_HOME"))
-    except:
-        debug.error("$OPENRAM_HOME is not properly defined.", 1)
-
-    debug.check(os.path.isdir(OPENRAM_HOME),
-                "$OPENRAM_HOME does not exist: {0}".format(OPENRAM_HOME))
-
-    if OPENRAM_HOME not in sys.path:
-        debug.error("Please add OPENRAM_HOME to the PYTHONPATH.", -1)
+    from openram import OPENRAM_HOME
+    debug.info(1, "OpenRAM source code found in {}".format(OPENRAM_HOME))

     # Use a unique temp subdirectory if multithreaded
     if OPTS.num_threads > 1 or OPTS.openram_temp == "/tmp":

         # Make a unique subdir
         tempdir = "/openram_{0}_{1}_temp".format(getpass.getuser(),
                                                  os.getpid())
         # Only add the unique subdir one time
         if tempdir not in OPTS.openram_temp:
             OPTS.openram_temp += tempdir

     if not OPTS.openram_temp.endswith('/'):
         OPTS.openram_temp += "/"
     debug.info(1, "Temporary files saved in " + OPTS.openram_temp)


 def is_exe(fpath):
     """ Return true if the given is an executable file that exists. """

     return os.path.exists(fpath) and os.access(fpath, os.X_OK)


 def find_exe(check_exe):
     """
-    Check if the binary exists in any path dir
-    and return the full path.
+    Check if the binary exists in any path dir and return the full path.
     """

+    # Search for conda setup if used
+    if OPTS.use_conda:
+        from openram import CONDA_HOME
+        search_path = "{0}/bin{1}{2}".format(CONDA_HOME,
+                                             os.pathsep,
+                                             os.environ["PATH"])
+    else:
+        search_path = os.environ["PATH"]
+
     # Check if the preferred spice option exists in the path
-    for path in os.environ["PATH"].split(os.pathsep):
+    for path in search_path.split(os.pathsep):
         exe = os.path.join(path, check_exe)
         # if it is found, then break and use first version
         if is_exe(exe):
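The PATH-scanning pattern in `find_exe` above can be exercised on its own. This is a minimal sketch, not the OpenRAM module itself: it takes the search path as an explicit argument instead of consulting `OPTS`, builds a temporary directory holding one executable file, and searches it:

```python
import os
import stat
import tempfile

def is_exe(fpath):
    """True if fpath exists and is executable, as in the diff above."""
    return os.path.exists(fpath) and os.access(fpath, os.X_OK)

def find_exe(check_exe, search_path):
    """Return the first executable named check_exe on search_path, else None."""
    for path in search_path.split(os.pathsep):
        exe = os.path.join(path, check_exe)
        if is_exe(exe):
            return exe
    return None

# Build a fake one-entry PATH containing a single executable file.
tmp = tempfile.mkdtemp()
tool = os.path.join(tmp, "mytool")
open(tool, "w").close()
os.chmod(tool, os.stat(tool).st_mode | stat.S_IXUSR)

print(find_exe("mytool", tmp))   # the full path to mytool
print(find_exe("missing", tmp))  # None
```

Splitting on `os.pathsep` and returning the first hit matches the "break and use first version" comment in the diff: earlier PATH entries win, which is why the real code prepends the conda `bin` directory when conda is in use.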
@@ -473,40 +462,42 @@ def find_exe(check_exe):


 def init_paths():
-    """ Create the temp and output directory if it doesn't exist """
+    """ Create the temp and output directory if it doesn't exist. """

     if os.path.exists(OPTS.openram_temp):
         purge_temp()
     else:
-        # make the directory if it doesn't exist
+        # Make the directory if it doesn't exist
         try:
-            debug.info(1,
-                       "Creating temp directory: {}".format(OPTS.openram_temp))
+            debug.info(1, "Creating temp directory: {}".format(OPTS.openram_temp))
             os.makedirs(OPTS.openram_temp, 0o750)
         except OSError as e:
-            if e.errno == 17: # errno.EEXIST
+            if e.errno == 17:  # errno.EEXIST
                 os.chmod(OPTS.openram_temp, 0o750)
+            else:
+                debug.error("Unable to make temp directory: {}".format(OPTS.openram_temp), -1)
     #import inspect
     #s = inspect.stack()
     #from pprint import pprint
     #pprint(s)
     #print("Test {0} in dir {1}".format(s[2].filename, OPTS.openram_temp))


     # Don't delete the output dir, it may have other files!
     # make the directory if it doesn't exist
     try:
         os.makedirs(OPTS.output_path, 0o750)
     except OSError as e:
-        if e.errno == 17: # errno.EEXIST
+        if e.errno == 17:  # errno.EEXIST
             os.chmod(OPTS.output_path, 0o750)
-    except:
-        debug.error("Unable to make output directory.", -1)
+        else:
+            debug.error("Unable to make output directory: {}".format(OPTS.output_path), -1)


 def set_default_corner():
     """ Set the default corner. """

-    import tech
+    from openram import tech
     # Set some default options now based on the technology...
     if (OPTS.process_corners == ""):
         if OPTS.nominal_corner_only:
@@ -539,19 +530,38 @@ def import_tech():
     """ Dynamically adds the tech directory to the path and imports it. """
     global OPTS

-    debug.info(2,
-               "Importing technology: " + OPTS.tech_name)
+    debug.info(2, "Importing technology: " + OPTS.tech_name)

     # environment variable should point to the technology dir
+    OPENRAM_TECH = ""
+
+    # Check if $OPENRAM_TECH is defined
     try:
         OPENRAM_TECH = os.path.abspath(os.environ.get("OPENRAM_TECH"))
     except:
-        debug.error("$OPENRAM_TECH environment variable is not defined.", 1)
+        debug.info(2,
+                   "$OPENRAM_TECH environment variable is not defined. "
+                   "Only the default technology modules will be considered if installed.")
+    # Point to the default technology modules that are part of the openram package
+    try:
+        import openram
+        if OPENRAM_TECH != "":
+            OPENRAM_TECH += ":"
+        OPENRAM_TECH += os.path.dirname(openram.__file__) + "/technology"
+    except:
+        if OPENRAM_TECH == "":
+            debug.warning("Couldn't find a tech directory. "
+                          "Install openram library or set $OPENRAM_TECH.")
+
+    debug.info(1, "Tech directory found in {}".format(OPENRAM_TECH))
+
+    # Add this environment variable to os.environ and openram namespace
+    os.environ["OPENRAM_TECH"] = OPENRAM_TECH
+    openram.OPENRAM_TECH = OPENRAM_TECH

     # Add all of the paths
     for tech_path in OPENRAM_TECH.split(":"):
         debug.check(os.path.isdir(tech_path),
-                    "$OPENRAM_TECH does not exist: {0}".format(tech_path))
+                    "$OPENRAM_TECH does not exist: {}".format(tech_path))
         sys.path.append(tech_path)
         debug.info(1, "Adding technology path: {}".format(tech_path))
@@ -559,22 +569,27 @@ def import_tech():
     try:
         tech_mod = __import__(OPTS.tech_name)
     except ImportError:
-        debug.error("Nonexistent technology module: {0}".format(OPTS.tech_name), -1)
+        debug.error("Nonexistent technology module: {}".format(OPTS.tech_name), -1)

     OPTS.openram_tech = os.path.dirname(tech_mod.__file__) + "/"

-    # Prepend the tech directory so it is sourced FIRST
+    # Append tech_path to openram.__path__ to import it from openram
     tech_path = OPTS.openram_tech
-    sys.path.insert(0, tech_path)
+    openram.__path__.append(tech_path)
     try:
-        import tech
+        from openram import tech
     except ImportError:
         debug.error("Could not load tech module.", -1)

-    # Prepend custom modules of the technology to the path, if they exist
-    custom_mod_path = os.path.join(tech_path, "modules/")
+    # Remove OPENRAM_TECH from sys.path because we should be done with those
+    for tech_path in OPENRAM_TECH.split(":"):
+        sys.path.remove(tech_path)
+
+    # Add the custom modules to "tech"
+    custom_mod_path = os.path.join(tech_path, "custom/")
     if os.path.exists(custom_mod_path):
-        sys.path.insert(0, custom_mod_path)
+        from openram import tech
+        tech.__path__.append(custom_mod_path)


 def print_time(name, now_time, last_time=None, indentation=2):
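The hunk above replaces `sys.path` manipulation with appends to `openram.__path__`: extending a package's `__path__` makes modules in an arbitrary directory importable as submodules of that package. A hedged standalone sketch of the same mechanism, using a hypothetical `fakepkg` package and a temporary directory rather than OpenRAM's real tech tree:

```python
# Sketch of the __path__ trick: once a directory is appended to a package's
# __path__, the normal import machinery will find modules there as submodules.
import os
import sys
import tempfile
import types

# Create a stand-in package (hypothetical name, not part of OpenRAM).
pkg = types.ModuleType("fakepkg")
pkg.__path__ = []              # a __path__ attribute marks it as a package
sys.modules["fakepkg"] = pkg

# Write a module into a scratch directory, standing in for a tech directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "tech.py"), "w") as f:
    f.write("value = 42\n")

pkg.__path__.append(tmp)       # like openram.__path__.append(tech_path)
from fakepkg import tech       # resolved through the extended __path__
print(tech.value)              # 42
```

This is why the diff can drop the `sys.path.insert` calls: the tech modules become `openram.tech` submodules instead of polluting the global module search path.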
@@ -623,7 +638,7 @@ def report_status():
     total_size = OPTS.word_size*OPTS.num_words*OPTS.num_banks
     debug.print_raw("Total size: {} bits".format(total_size))
     if total_size >= 2**14 and not OPTS.analytical_delay:
-        debug.warning("Characterizing large memories ({0}) will have a large run-time. ".format(total_size))
+        debug.warning("Characterizing large memories ({0}) will have a large run-time.".format(total_size))
     debug.print_raw("Word size: {0}\nWords: {1}\nBanks: {2}".format(OPTS.word_size,
                                                                    OPTS.num_words,
                                                                    OPTS.num_banks))
@@ -1 +1,6 @@
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
 model_name = "cacti"
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 10
 num_words = 64
 words_per_row = 4
@@ -1,8 +1,13 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 128
 num_words = 1024

 output_extended_config = True
 output_datasheet_info = True
 netlist_only = True
-nominal_corner_only = True
+nominal_corner_only = True
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 12
 num_words = 128
 words_per_row = 4
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 12
 num_words = 16
 words_per_row = 1
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 12
 num_words = 256
 words_per_row = 16
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 12
 num_words = 256
 words_per_row = 8
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 14
 num_words = 32
 words_per_row = 2
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 15
 num_words = 512
 words_per_row = 8
@ -1,4 +1,9 @@
|
|||
from shared_config import *
|
||||
# See LICENSE for licensing information.
|
||||
#
|
||||
# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
|
||||
# All rights reserved.
|
||||
#
|
||||
from .shared_config import *
|
||||
word_size = 16
|
||||
num_words = 1024
|
||||
words_per_row = 16
@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 17
 num_words = 1024
 words_per_row = 16

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 17
 num_words = 256
 words_per_row = 16

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 18
 num_words = 128
 words_per_row = 2

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 18
 num_words = 32
 words_per_row = 1

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 21
 num_words = 1024
 words_per_row = 4

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 22
 num_words = 512
 words_per_row = 16

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 23
 num_words = 1024
 words_per_row = 16

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 26
 num_words = 64
 words_per_row = 4

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 27
 num_words = 1024
 words_per_row = 4

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 27
 num_words = 256
 words_per_row = 8

@@ -1,4 +1,9 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 27
 num_words = 512
 words_per_row = 4

@@ -1,3 +1,8 @@
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
 from shared_config import *
 word_size = 32
 num_words = 1024
@@ -5,4 +10,4 @@ num_words = 1024
 output_extended_config = True
 output_datasheet_info = True
 netlist_only = True
-nominal_corner_only = True
+nominal_corner_only = True

@@ -1,8 +1,13 @@
-from shared_config import *
+# See LICENSE for licensing information.
+#
+# Copyright (c) 2016-2023 Regents of the University of California, Santa Cruz
+# All rights reserved.
+#
+from .shared_config import *
 word_size = 32
 num_words = 2048

 output_extended_config = True
 output_datasheet_info = True
 netlist_only = True
 nominal_corner_only = True

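The three parameters these config files set (`word_size`, `num_words`, `words_per_row`) jointly determine the shape of the generated bitcell array. As an illustrative sketch (not OpenRAM code itself): `words_per_row` acts as the column-mux factor, so larger values trade a taller array for a wider one.

```python
# Illustrative sketch (not from OpenRAM): how the config parameters in these
# diffs relate to the bitcell array geometry. The function name and the
# rows/cols relationship shown here are assumptions for explanation only.
def array_shape(word_size, num_words, words_per_row):
    assert num_words % words_per_row == 0, "num_words must be divisible by words_per_row"
    rows = num_words // words_per_row   # number of wordlines
    cols = word_size * words_per_row    # number of bitcell columns
    return rows, cols

# Values taken from two of the configs in this diff:
print(array_shape(12, 256, 8))    # -> (32, 96)
print(array_shape(23, 1024, 16))  # -> (64, 368)
```

This is why several of these regression configs vary only `words_per_row` for the same `word_size`/`num_words`: each setting exercises a different array aspect ratio and column-mux configuration.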
Some files were not shown because too many files have changed in this diff.