32 Commits

Author SHA1 Message Date
Jan Kundrát
049b077ee4 Use real IP addresses for the US-based Cassinis
Change-Id: I158bb84261a56d71074155880c4359033b2f1044
2020-03-07 16:49:39 -08:00
Jan Kundrát
ab2080a805 Update IP addresses and hostnames for the OFC2020 demo
Change-Id: Ie8d30d56f94d1ce14f8ac62ceec7f0e57a3486b2
2020-02-12 17:32:07 +01:00
Jan Kundrát
8ab54e76df Merge branch 'develop' into experimental/2020-ofc
Change-Id: I4f7d3cc91734a03251b4ad4d82b05aad68d0ef5f
2020-02-12 17:18:50 +01:00
Jan Kundrát
f015c6abed ROADM module replacements 2020-01-07 16:29:15 +01:00
Jan Kundrát
71293c1c18 demo: reduce the spectrum so that it's safely and conveniently deep in the C-band 2019-11-12 20:26:08 +01:00
Jan Kundrát
bd7c70f902 demo: Fix ONOS dev-id mapping for Ams-L2
A duplicate key in the dict means that bad things happen.
2019-11-12 13:20:16 +01:00
Jan Kundrát
20c92d4338 demo: add an endpoint which returns success so that ONOS can verify connectivity 2019-11-12 11:53:41 +01:00
Jan Kundrát
f0158e7202 demo: fix transponder name and type 2019-11-12 11:41:51 +01:00
Jan Kundrát
62408ddc98 demo: hardcode the device IP addresses 2019-11-11 16:48:53 +01:00
Jan Kundrát
b4f87b36db REST API: output detailed info about the reversed path for bidi requests 2019-11-08 15:27:21 +01:00
Jan Kundrát
9f49a115a1 Add a path-route-object with EDFA-specific per-channel power and output VOA settings
...once again, for the demo.
2019-11-08 13:00:40 +01:00
Jan Kundrát
c7d2305589 REST: return element type for EDFA, TXP and ROADM elements
...as requested by Andrea during today's call.
2019-11-08 12:50:11 +01:00
Jan Kundrát
5826a649de sync topology with Esther's proposal 2019-11-05 13:52:16 +01:00
EstherLerouzic
fa826391f6 Add some tests to support partial per degree target power definition
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-11-05 13:18:37 +01:00
EstherLerouzic
3481ba8ee3 add the degree info of next node during path propagation
When the node is a ROADM, add the degree info of the next node during
path propagation.

Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-11-05 13:18:18 +01:00
EstherLerouzic
b4ab0b55de use the per degree target_pch_out_db for the target power in network build
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-11-05 13:17:02 +01:00
EstherLerouzic
0370b45d8a Add per degree power information in ROADM
- add the per degree info using the EXACT next node uid as identifier
  of the degree
- add the degree identifier on the propagate and call functions

Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-11-05 13:16:54 +01:00
EstherLerouzic
468e689094 Add per channel power target out
Works OK only for ROADMs that face the line, but it may be a problem
for the express path; to be checked.

Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-11-05 13:16:38 +01:00
Jan Kundrát
aafd82b16d Docker: run the TIP Summit 2019 demo by default
Here's a TL;DR of how to use this. First, start the container so that it
exports its port 5000 for incoming HTTP requests:

- docker run -P -it --rm telecominfraproject/oopt-gnpy-experimental:$SOME_VERSION

You'll have to replace `$SOME_VERSION` with an actual version. There is
no default `latest` tag on this particular Docker repository.

Then find out which host port Docker mapped to the container's port 5000:

```console-session
~ # docker ps
CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS              PORTS                     NAMES
6004c3a9b741        telecominfraproject/oopt-gnpy-experimental:v1.8-114-g7e0abc4   "/oopt-gnpy/.docker-…"   48 seconds ago      Up 44 seconds       0.0.0.0:32768->5000/tcp   eloquent_hawking
~ # docker port 6004c3a9b741
5000/tcp -> 0.0.0.0:32768
```

Path computation can then be requested like this:

- curl -v -X POST -H "Content-Type: application/json" -d @examples/2019-demo-services.json http://127.0.0.1:32768/gnpy-experimental

This one will try to compute two disjoint optical paths and output their
respective optical performance.
2019-10-28 17:02:08 +01:00
Jan Kundrát
60ee331153 demo: simplify the REST API interface
The topology will be provisioned out-of-band, so let's simplify the REST
API so that it reflects that design. Also, let's make it obvious that
the API is subject to change and should not be relied upon at this time.
It's meant to be an experimental interface with a data I/O format which
*will* change as we adapt a proper YANG model for both directions.
2019-10-28 15:59:18 +01:00
Jan Kundrát
3a8ce74355 topologies and service requests for the TIP Summit demo
The examples/2019-generate-tip-demo.py helper script can be used to
generate a ring topology where each "ROADM node" consists of three
separate ROADMs and two pairs of booster+preamp EDFAs. This will be used
at the TIP Summit to show integration between ONOS and GNPy.

The topology (and the equipment library) more or less corresponds to the
CzechLight OLS that is planned for the exhibition.
2019-10-28 15:55:18 +01:00
Jan Kundrát
fd44463238 REST: do not use HTTP auth
I do not think that proof-of-concept demos should implement HTTP auth
because GNPy has no concept of access lists.  If people want to use this
in a "real scenario", they will likely wrap Python's HTTP server behind
a real HTTP reverse proxy, and they can then implement proper ACL at
that layer.
2019-10-28 15:45:36 +01:00
EstherLerouzic
84ba2da553 add a 400 return with a message in case of service error
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-14 17:04:07 +01:00
EstherLerouzic
e693d96ca1 limit generators to support fused in preamps
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-14 15:52:49 +01:00
EstherLerouzic
81cb7f8133 corrections following the Codacy report
- remove unused abort and marshall
- change variable names to conform to the naming rule
  [a-z_][a-z0-9_]{2,30}$
- add docstrings

Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-14 15:37:58 +01:00
EstherLerouzic
3471969956 add flask_restful and flask_httpauth packages to requirements
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 12:21:28 +01:00
EstherLerouzic
7a0985c362 add flask to requirements
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 12:21:28 +01:00
EstherLerouzic
b79a9e2e67 example of result in json format
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 12:21:28 +01:00
EstherLerouzic
1e037fe6f5 add example for simple topology
The same topology file contains the triangle topology

        site_c
        /  \
       /    \
site_a ------ Site_b

and the simple parallel link
site_a ------ Site_b
        \  /
         --

This topology includes only simple span hops, and the ROADMs have boosters and amplifiers.

The serviceDemov1.json file shows how the requests must be formulated:

- 0: a simple request
- 1: a request with a forced span (the parallel-link case)
- 2: a request with a forced ROADM (the triangle-topology case)
- 3 and 4: requests with a disjunction constraint

Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 12:21:10 +01:00
EstherLerouzic
0897be57c1 use a default topology file when api input topo is empty
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 09:04:36 +01:00
EstherLerouzic
4172b06b19 Update service and result json templates
Signed-off-by: EstherLerouzic <esther.lerouzic@orange.com>
2019-10-10 09:04:36 +01:00
ahmed
32a4875e46 add the option "rest" to activate the REST API
- add the function launch_cli to launch the "cli" mode
- add the class Gnpy_API to launch the "api" mode
- modify the main so that GNPy can be launched in two
modes, "rest" and "cli"

Signed-off-by: ahmed <ahmed.triki@orange.com>
2019-10-10 09:04:36 +01:00
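
Before the list of changed files below, here is a minimal, hypothetical Python sketch of the cli/rest mode switch that commit 32a4875e46 describes. Only the `launch_cli` name, the `Gnpy_API` class, the `--rest` option, port 5000 and the `/gnpy-experimental` endpoint come from this change set; every other name and the payload handling are illustrative assumptions, not the actual GNPy code.

```python
# Hypothetical sketch of the cli/rest switch from commit 32a4875e46 -- structure
# only, not the real implementation.
import argparse

from flask import Flask, jsonify, request


def launch_cli(args):
    """Placeholder for the classic command-line path computation."""
    print(f"CLI mode: {args.topology} / {args.services} / {args.equipment}")


def launch_rest():
    """Placeholder for the REST mode (the Gnpy_API class); listens on port 5000."""
    app = Flask(__name__)

    @app.route("/gnpy-experimental", methods=["POST"])
    def compute():
        # In the real code this would trigger path computation on the posted services.
        services = request.get_json(force=True)
        return jsonify({"received": bool(services)}), 201

    app.run(host="0.0.0.0", port=5000)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("topology")
    parser.add_argument("services")
    parser.add_argument("equipment")
    parser.add_argument("--rest", action="store_true",
                        help="start the experimental REST API instead of the CLI")
    args = parser.parse_args()
    launch_rest() if args.rest else launch_cli(args)
```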
157 changed files with 88956 additions and 104207 deletions

View File

@@ -1,2 +1 @@
[bandit]
skips: B101
skips: ['B101']

View File

@@ -1,9 +1 @@
comment: off
coverage:
status:
project:
default:
threshold: 5%
patch:
default:
only_pulls: true

View File

@@ -1,3 +1,3 @@
#!/bin/bash
cp -nr /oopt-gnpy/gnpy/example-data /shared
cp -nr /oopt-gnpy/examples /shared
exec "$@"

View File

@@ -5,17 +5,16 @@ set -e
IMAGE_NAME=telecominfraproject/oopt-gnpy
IMAGE_TAG=$(git describe --tags)
if [[ "${TRAVIS_BRANCH}" == "experimental/2019-summit" ]]; then
IMAGE_NAME=telecominfraproject/oopt-gnpy-experimental
fi
ALREADY_FOUND=0
docker pull ${IMAGE_NAME}:${IMAGE_TAG} && ALREADY_FOUND=1
if [[ $ALREADY_FOUND == 0 ]]; then
docker build . -t ${IMAGE_NAME}
docker tag ${IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG}
# shared directory setup: do not clobber the real data
mkdir trash
cd trash
docker run -it --rm --volume $(pwd):/shared ${IMAGE_NAME} gnpy-transmission-example
else
echo "Image ${IMAGE_NAME}:${IMAGE_TAG} already available, will just update the other tags"
fi
@@ -43,5 +42,11 @@ if [[ "${TRAVIS_PULL_REQUEST}" == "false" ]]; then
docker push ${IMAGE_NAME}:${IMAGE_TAG}
fi
docker push ${IMAGE_NAME}:stable
elif [[ "${TRAVIS_BRANCH}" == "experimental/2019-summit" ]]; then
echo "Publishing ad-hoc image for the TIP Summit demo"
do_docker_login
if [[ $ALREADY_FOUND == 0 ]]; then
docker push ${IMAGE_NAME}:${IMAGE_TAG}
fi
fi
fi

View File

@@ -1,7 +0,0 @@
# Thanks for contributing to GNPy
If it isn't much trouble, please send your contribution as patches to our Gerrit.
Here's [how to submit patches](https://review.gerrithub.io/Documentation/intro-gerrit-walkthrough-github.html), and here's a [list of stuff we are currently working on](https://review.gerrithub.io/q/project:Telecominfraproject/oopt-gnpy+status:open).
Just sign in via your existing GitHub account.
However, if you feel more comfortable with filing GitHub PRs, we can work with that too.

View File

@@ -1,112 +0,0 @@
on:
push:
pull_request:
branches:
- master
name: CI
jobs:
build:
name: Tox test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: fedora-python/tox-github-action@v0.4
with:
tox_env: ${{ matrix.tox_env }}
dnf_install: ${{ matrix.dnf_install }}
- uses: codecov/codecov-action@29386c70ef20e286228c72b668a06fd0e8399192
if: ${{ endswith(matrix.tox_env, '-cover') }}
with:
files: ${{ github.workspace }}/cover/coverage.xml
strategy:
matrix:
tox_env:
- py38
- py39
- py310-cover
include:
- tox_env: docs
dnf_install: graphviz
pypi:
needs: build
if: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && github.repository_owner == 'Telecominfraproject' }}
name: PyPI packaging
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
name: Install Python
with:
python-version: '3.10'
- uses: casperdcl/deploy-pypi@bb869aafd89f657ceaafe9561d3b5584766c0f95
with:
password: ${{ secrets.PYPI_API_TOKEN }}
pip: wheel -w dist/ --no-deps .
upload: true
docker:
needs: build
if: ${{ github.event_name == 'push' && (github.ref == 'refs/heads/master' || startsWith(github.ref, 'refs/tags/v')) && github.repository_owner == 'Telecominfraproject' }}
name: Docker image
runs-on: ubuntu-latest
steps:
- name: Log in to Docker Hub
uses: docker/login-action@v1
with:
username: jktjkt
password: ${{ secrets.DOCKERHUB_TOKEN }}
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Extract tag name
if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
id: extract_pretty_git
run: echo ::set-output name=GIT_DESC::$(git describe --tags)
- name: Build and push a container
uses: docker/build-push-action@v2
if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
with:
context: .
push: true
tags: |
telecominfraproject/oopt-gnpy:${{ steps.extract_pretty_git.outputs.GIT_DESC }}
telecominfraproject/oopt-gnpy:master
- name: Extract tag name
if: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }}
id: extract_tag_name
run: echo ::set-output name=GIT_DESC::${GITHUB_REF/refs\/tags\//}
- name: Build and push a container
uses: docker/build-push-action@v2
if: ${{ github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') }}
with:
context: .
push: true
tags: |
telecominfraproject/oopt-gnpy:${{ steps.extract_tag_name.outputs.GIT_DESC }}
telecominfraproject/oopt-gnpy:latest
windows:
name: Tests on Windows
runs-on: windows-2019
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python_version }}
- run: |
pip install --editable .
pip install 'pytest>=6.2.5,<7'
pytest -vv
strategy:
matrix:
python_version:
- "3.10"

.gitignore vendored
View File

@@ -3,7 +3,6 @@ __pycache__/
*.py[cod]
*$py.class
.ipynb_checkpoints
.idea
# C extensions
*.so

View File

@@ -1,4 +0,0 @@
[gerrit]
host=review.gerrithub.io
project=Telecominfraproject/oopt-gnpy
defaultrebase=0

View File

@@ -1,3 +0,0 @@
queries:
- exclude: py/clear-text-logging-sensitive-data
- exclude: py/clear-text-storage-sensitive-data

View File

@@ -1,5 +1,4 @@
build:
image: latest
python:
version: 3.8
requirements_file: docs/requirements.txt
version: 3.6

View File

@@ -1,20 +1,20 @@
dist: focal
os: linux
dist: xenial
sudo: false
language: python
services: docker
python:
- "3.8"
- "3.9"
before_install:
- sudo apt-get -y install graphviz
- "3.6"
- "3.7"
install: skip
script:
- pip install --editable .
- python setup.py install
- pip install pytest-cov rstcheck
- pytest --cov-report=xml --cov=gnpy -v
- pip install -r docs/requirements.txt
- rstcheck --ignore-roles cite *.rst
- sphinx-build -W --keep-going docs/ x-throwaway-location
- pytest --cov-report=xml --cov=gnpy
- rstcheck --ignore-roles cite --ignore-directives automodule --recursive --ignore-messages '(Duplicate explicit target name.*)' .
- ./examples/transmission_main_example.py
- ./examples/path_requests_run.py
- ./examples/transmission_main_example.py examples/raman_edfa_example_network.json --sim examples/sim_params.json --show-channels
- sphinx-build docs/ x-throwaway-location
after_success:
- bash <(curl -s https://codecov.io/bash)
jobs:

View File

@@ -2,41 +2,7 @@
- project:
check:
jobs:
- tox-py38
- tox-py39
- tox-py310-cover
- tox-docs-f35
- coverage-diff:
voting: false
dependencies:
- tox-py310-cover-previous
- tox-py310-cover
vars:
coverage_job_name_previous: tox-py310-cover-previous
coverage_job_name_current: tox-py310-cover
- tox-linters-diff-n-report:
voting: false
- tox-py310-cover-previous
tag:
- noop
gate:
jobs:
- oopt-release-python:
secrets:
- secret: pypi-oopt-gnpy
name: pypi_info
pass-to-parent: true
- secret:
name: pypi-oopt-gnpy
data:
username: __token__
password: !encrypted/pkcs1-oaep
- Taod9JmSMtVAvC5ShSbB3UWuccktQvutdySrj0G7a1Nk4tKFQIdwDXEnBuLpHsZVvsU9Q
6uk4wRVQABDSdNNI/+M/1FwmZfoxuOXa02U5S1deuxW/rBHTxzYcuB8xriwhArBvTiDMk
zyWHVysgDsjlR+85h/DkEhvsaMRDLYWqFwYgXizMoGNKVkwDVIH+qkhBmbggQfDpcYPKT
1gq0d6fw0eKVJtO8+vonMEcE0sWZvHmZvSSu0H++gxoe1W/JtzbCteH3Ak0zktwBHI8Qt
WBqFvY3laad335tpkFJN5b949N+DP8svCWwRwXmkZlHplPYZWF6QpYbEEXL/6Q0H6VwL+
om4f7ybYpKe9Gl939uv2INnXaKe5EU6CMsSw40r2XZCjnSTjWOTgh9pUn2PsoHnqUlALW
VR4Z+ipnCrEbu8aTmX3ROcnwYNS7OXkq4uhwDU1u9QjzyMHet6NQQhwhGtimsTo9KhL4E
TEUNiRlbAgow9WOwM5r3vRzddO8T2HZZSGaWj75qNRX46XPQWRWgB7ItAwyXgwLZ8UzWl
HdztjS3D7Hlsqno3zxNOVlhA5/vl9uVnhFbJnMtUOJAB07YoTJOeR+LjQ0avx/VzopxXc
RA/WvJXVZSBrlAHY0+ip4wPZvdi4Ph90gpmvHJvoH82KVfp2j5jxzUhsage94I=
- noop

View File

@@ -7,7 +7,7 @@ To learn how to contribute, please see CONTRIBUTING.md
- Alessio Ferrari (Politecnico di Torino) <alessio.ferrari@polito.it>
- Anders Lindgren (Telia Company) <Anders.X.Lindgren@teliacompany.com>
- Andrea D'Amico (Politecnico di Torino) <andrea.damico@polito.it>
- Andrea d'Amico (Politecnico di Torino) <andrea.damico@polito.it>
- Brian Taylor (Facebook) <briantaylor@fb.com>
- David Boertjes (Ciena) <dboertje@ciena.com>
- Diego Landa (Facebook) <dlanda@fb.com>

View File

@@ -1,8 +1,8 @@
FROM python:3.9-slim
FROM python:3.7-slim
COPY . /oopt-gnpy
WORKDIR /oopt-gnpy
RUN apt update; apt install -y git
RUN pip install .
WORKDIR /shared/example-data
RUN python setup.py install
WORKDIR /shared
ENTRYPOINT ["/oopt-gnpy/.docker-entry.sh"]
CMD ["/bin/bash"]
CMD ["python", "examples/path_requests_run.py", "examples/2019-demo-topology.json", "examples/2019-demo-services.json", "examples/2019-demo-equipment.json", "--rest"]
EXPOSE 5000

View File

@@ -1,9 +1,8 @@
.. _excel:
Excel (XLS, XLSX) input files
=============================
How to prepare the Excel input file
-----------------------------------
``gnpy-transmission-example`` gives the possibility to use an excel input file instead of a json file. The program then will generate the corresponding json file for you.
`examples/transmission_main_example.py <examples/transmission_main_example.py>`_ gives the possibility to use an excel input file instead of a json file. The program then will generate the corresponding json file for you.
The file named 'meshTopologyExampleV2.xls' is an example.
@@ -17,8 +16,6 @@ In order to work the excel file MUST contain at least 2 sheets:
- Eqt
- Service
.. _excel-nodes-sheet:
Nodes sheet
-----------
@@ -37,7 +34,7 @@ Each line represents a 'node' (ROADM site or an in line amplifier site ILA or a
- If filled, it can take "ROADM", "FUSED" or "ILA" values. If another string is used, it will be considered as not filled. FUSED means that ingress and egress spans will be fused together.
- *State*, *Country*, *Region* are not mandatory.
"Region" is a holdover from the CORONET topology reference file `CORONET_Global_Topology.xlsx <gnpy/example-data/CORONET_Global_Topology.xlsx>`_. CORONET separates its network into geographical regions (Europe, Asia, Continental US.) This information is not used by gnpy.
"Region" is a holdover from the CORONET topology reference file `CORONET_Global_Topology.xls <examples/CORONET_Global_Topology.xls>`_. CORONET separates its network into geographical regions (Europe, Asia, Continental US.) This information is not used by gnpy.
- *Longitude*, *Latitude* are not mandatory. If filled they should contain numbers.
@@ -47,8 +44,6 @@ Each line represents a 'node' (ROADM site or an in line amplifier site ILA or a
**There MUST NOT be empty line(s) between two nodes lines**
.. _excel-links-sheet:
Links sheet
-----------
@@ -85,11 +80,11 @@ and a fiber span from node3 to node6::
- If filled it MUST contain numbers. If empty it is replaced by a default "80" km value.
- If value is below 150 km, it is considered as a single (bidirectional) fiber span.
- If value is over 150 km the ``gnpy-transmission-example`` program will automatically suppose that intermediate span description are required and will generate fiber spans elements with "_1","_2", ... trailing strings which are not visible in the json output. The reason for the splitting is that current edfa usually do not support large span loss. The current assumption is that links larger than 150km will require intermediate amplification. This value will be revisited when Raman amplification is added”
- If value is over 150 km the `transmission_main_example.py <examples/transmission_main_example.py>`_ program will automatically suppose that intermediate span description are required and will generate fiber spans elements with "_1","_2", ... trailing strings which are not visible in the json output. The reason for the splitting is that current edfa usually do not support large span loss. The current assumption is that links larger than 150km will require intermediate amplification. This value will be revisited when Raman amplification is added”
- **Fiber type** is not mandatory.
If filled it must contain types listed in `eqpt_config.json <gnpy/example-data/eqpt_config.json>`_ in "Fiber" list "type_variety".
If filled it must contain types listed in `eqpt_config.json <examples/eqpt_config.json>`_ in "Fiber" list "type_variety".
If not filled it takes "SSMF" as default value.
- **Lineic att** is not mandatory.
@@ -118,22 +113,18 @@ and a fiber span from node3 to node6::
(in progress)
.. _excel-equipment-sheet:
Eqpt sheet
----------
The equipment sheet (named "Eqpt") is optional.
If provided, it specifies types of boosters and preamplifiers for all ROADM degrees of all ROADM nodes, and for all ILA nodes.
Eqt sheet is optional. It lists the amplifiers types and characteristics on each degree of the *Node A* line.
Eqpt sheet must contain twelve columns::
This sheet contains twelve columns::
<-- east cable from a to z --> <-- west from z to a -->
Node A ; Node Z ; amp type ; att_in ; amp gain ; tilt ; att_out ; amp type ; att_in ; amp gain ; tilt ; att_out
<-- east cable from a to z --> <-- west from z to a -->
Node A ; Node Z ; amp type ; att_in ; amp gain ; tilt ; att_out ; delta_p ; amp type ; att_in ; amp gain ; tilt ; att_out ; delta_p
If the sheet is present, it MUST have as many lines as egress directions of ROADMs defined in Links Sheet.
If the sheet is present, it MUST have as many lines as there are egress directions of ROADMs defined in Links Sheet, and all ILAs.
For example, consider the following list of links (A, B and C being a ROADM and amp# ILAs):
For example, consider the following list of links (A,B and C being a ROADM and amp# ILAs)
::
@@ -145,8 +136,8 @@ For example, consider the following list of links (A, B and C being a ROADM and
then Eqpt sheet should contain:
- one line for each ILAs: amp1, amp2, amp3
- one line for each one-degree ROADM (B and C in this example)
- two lines for each two-degree ROADM (just the ROADM A)
- one line for each degree 1 ROADMs B and C
- two lines for ROADM A which is a degree 2 ROADM
::
@@ -159,11 +150,11 @@ then Eqpt sheet should contain:
C - amp3
In case you already have filled Nodes and Links sheets `create_eqpt_sheet.py <gnpy/example-data/create_eqpt_sheet.py>`_ can be used to automatically create a template for the mandatory entries of the list.
In case you already have filled Nodes and Links sheets `create_eqpt_sheet.py <examples/create_eqpt_sheet.py>`_ can be used to automatically create a template for the mandatory entries of the list.
.. code-block:: shell
$ cd $(gnpy-example-data)
$ cd examples
$ python create_eqpt_sheet.py meshTopologyExampleV2.xls
This generates a text file meshTopologyExampleV2_eqt_sheet.txt whose content can be directly copied into the Eqt sheet of the excel file. The user then can fill the values in the rest of the columns.
@@ -176,7 +167,7 @@ This generates a text file meshTopologyExampleV2_eqt_sheet.txt whose content ca
- **Node Z** is mandatory. It is the egress direction from the *Node A* site. Multiple Links between the same Node A and NodeZ is not supported.
- **amp type** is not mandatory.
If filled it must contain types listed in `eqpt_config.json <gnpy/example-data/eqpt_config.json>`_ in "Edfa" list "type_variety".
If filled it must contain types listed in `eqpt_config.json <examples/eqpt_config.json>`_ in "Edfa" list "type_variety".
If not filled it takes "std_medium_gain" as default value.
If filled with fused, a fused element with 0.0 dB loss will be placed instead of an amplifier. This might be used to avoid booster amplifier on a ROADM direction.
@@ -184,24 +175,19 @@ This generates a text file meshTopologyExampleV2_eqt_sheet.txt whose content ca
If not filled, it will be determined with design rules in the convert.py file.
If filled, it must contain positive numbers.
- *att_in* and *att_out* are not mandatory and are not used yet. They are the value of the attenuator at input and output of amplifier (in dB).
- *att_in* and *att_out* are not mandatory and are not used yet. They are the value of the attenautor at input and output of amplifier (in dB).
If filled they must contain positive numbers.
- **tilt**, in dB, is not mandatory. It is the target gain tilt over the full amplfifier bandwidth and is defined with regard to wavelength, i.e. negative tilt means lower gain
for higher wavelengths (lower frequencies). If not filled, the default value is 0.
- **delta_p**, in dBm, is not mandatory. If filled it is used to set the output target power per channel at the output of the amplifier, if power_mode is True. The output power is then set to power_dbm + delta_power.
- *tilt* --TODO--
# to be completed #
(in progress)
.. _excel-service-sheet:
Service sheet
-------------
Service sheet is optional. It lists the services for which path and feasibility must be computed with ``gnpy-path-request``.
Service sheet is optional. It lists the services for which path and feasibility must be computed with path_requests_run.py.
Service sheet must contain 11 columns::
@@ -230,4 +216,36 @@ Service sheet must contain 11 columns::
- path: is the set of ROADM nodes that must be used by the path. It must contain the list of ROADM names that the path must cross. TODO : only ROADM nodes are accepted in this release. Relax this with any type of nodes. If filled it must contain ROADM ids separated by ' | '. Exact names are required.
- is loose? 'no' value means that the list of nodes should be strictly followed, while any other value means that the constraint may be relaxed if the node is not reachable.
- **path bandwidth** is mandatory. It is the amount of capacity required between source and destination in Gbit/s. Value should be positive (non zero). It is used to compute the amount of required spectrum for the service.
- ** path bandwidth** is optional. It is the amount of capacity required between source and destination in Gbit/s. Default value is 0.0 Gbit/s.
path_requests_run.py
------------------------
**Usage**: path_requests_run.py [-h] [-v] [-o OUTPUT]
[network_filename xls or json] [service_filename xls or json] [eqpt_filename json]
.. code-block:: shell
$ cd examples
$ python path_requests_run.py meshTopologyExampleV2.xls service_file.json eqpt_file -o output_file.json
A function that computes performances for a list of services provided in the service file (accepts json or excel format.
if the service <file.xls> is in xls format, path_requests_run.py converts it to a json file <file_services.json> following the Yang model for requesting Path Computation defined in `draft-ietf-teas-yang-path-computation-01.txt <https://www.ietf.org/id/draft-ietf-teas-yang-path-computation-01.pdf>`_. For PSE use, additional fields with trx type and mode have been added to the te-bandwidth field.
A template for the json file can be found here: `service_template.json <service_template.json>`_
If no output file is given, the computation is shown on standard output for demo.
If a file is specified with the optional -o argument, the result of the computation is converted into a json format following the Yang model for requesting Path Computation defined in `draft-ietf-teas-yang-path-computation-01.txt <https://www.ietf.org/id/draft-ietf-teas-yang-path-computation-01.pdf>`_. TODO: verify that this implementation is correct + give feedback to ietf on what is missing for our specific application.
A template for the result of computation json file can be found here: `path_result_template.json <path_result_template.json>`_
Important note: path_requests_run.py is not a network dimensionning tool : each service does not reserve spectrum, or occupy ressources such as transponders. It only computes path feasibility assuming the spectrum (between defined frequencies) is loaded with "nb of channels" spaced by "spacing" values as specified in the system parameters input in the service file, each cannel having the same characteristics in terms of baudrate, format, ... as the service transponder. The transceiver element acts as a "logical starting/stopping point" for the spectral information propagation. At that point it is not meant to represent the capacity of add drop ports
As a result transponder type is not part of the network info. it is related to the list of services requests.
In a next step we plan to provide required features to enable dimensionning : alocation of ressources, counting channels, limitation of the number of channels, ...
(in progress)

View File

@@ -1,29 +0,0 @@
# GNPy: Optical Route Planning and DWDM Network Optimization
[![Install via pip](https://img.shields.io/pypi/v/gnpy)](https://pypi.org/project/gnpy/)
[![Python versions](https://img.shields.io/pypi/pyversions/gnpy)](https://pypi.org/project/gnpy/)
[![Documentation status](https://readthedocs.org/projects/gnpy/badge/?version=master)](http://gnpy.readthedocs.io/en/master/?badge=master)
[![GitHub Workflow Status](https://img.shields.io/github/workflow/status/Telecominfraproject/oopt-gnpy/build)](https://github.com/Telecominfraproject/oopt-gnpy/actions/workflows/main.yml)
[![Gerrit](https://img.shields.io/badge/patches-via%20Gerrit-blue)](https://review.gerrithub.io/q/project:Telecominfraproject/oopt-gnpy+is:open)
[![Contributors](https://img.shields.io/github/contributors-anon/Telecominfraproject/oopt-gnpy)](https://github.com/Telecominfraproject/oopt-gnpy/graphs/contributors)
[![Code Quality via LGTM.com](https://img.shields.io/lgtm/grade/python/github/Telecominfraproject/oopt-gnpy)](https://lgtm.com/projects/g/Telecominfraproject/oopt-gnpy/)
[![Code Coverage via codecov](https://img.shields.io/codecov/c/github/Telecominfraproject/oopt-gnpy)](https://codecov.io/gh/Telecominfraproject/oopt-gnpy)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3458319.svg)](https://doi.org/10.5281/zenodo.3458319)
GNPy is an open-source, community-developed library for building route planning and optimization tools in real-world mesh optical networks.
We are a consortium of operators, vendors, and academic researchers sponsored via the [Telecom Infra Project](http://telecominfraproject.com)'s [OOPT/PSE](https://telecominfraproject.com/open-optical-packet-transport) working group.
Together, we are building this tool for rapid development of production-grade route planning tools which is easily extensible to include custom network elements and performant to the scale of real-world mesh optical networks.
![GNPy with an OLS system](docs/images/GNPy-banner.png)
## Quick Start
Install either via [Docker](https://gnpy.readthedocs.io/en/master/install.html#using-prebuilt-docker-images), or as a [Python package](https://gnpy.readthedocs.io/en/master/install.html#using-python-on-your-computer).
Read our [documentation](https://gnpy.readthedocs.io/), learn from the demos, and [get in touch with us](https://github.com/Telecominfraproject/oopt-gnpy/discussions).
This example demonstrates how GNPy can be used to check the expected SNR at the end of the line by varying the channel input power:
![Running a simple simulation example](https://telecominfraproject.github.io/oopt-gnpy/docs/images/transmission_main_example.svg)
GNPy can do much more, including acting as a Path Computation Engine, tracking bandwidth requests, or advising the SDN controller about a best possible path through a large DWDM network.
Learn more about this [in the documentation](https://gnpy.readthedocs.io/).

View File

@@ -1,40 +1,211 @@
.. _json:
.. image:: docs/images/GNPy-banner.png
:width: 100%
:align: left
:alt: GNPy with an OLS system
JSON Input Files
================
====================================================================
`gnpy`: mesh optical network route planning and optimization library
====================================================================
GNPy uses a set of JSON files for modeling the network.
Some data (such as network topology or the service requests) can be also passed via :ref:`XLS files<excel-service-sheet>`.
|docs| |build| |doi|
Equipment Library
-----------------
**`gnpy` is an open-source, community-developed library for building route
planning and optimization tools in real-world mesh optical networks.**
Design and transmission parameters are defined in a dedicated json file.
By default, this information is read from `gnpy/example-data/eqpt_config.json <https://github.com/Telecominfraproject/oopt-gnpy/blob/master/gnpy/example-data/eqpt_config.json>`_.
This file defines the equipment libraries that can be customized (EDFAs, fibers, and transceivers).
`gnpy <http://github.com/telecominfraproject/oopt-gnpy>`__ is:
--------------------------------------------------------------
It also defines the simulation parameters (spans, ROADMs, and the spectral information to transmit.)
- a sponsored project of the `OOPT/PSE <https://telecominfraproject.com/open-optical-packet-transport/>`_ working group of the `Telecom Infra Project <http://telecominfraproject.com>`_
- fully community-driven, fully open source library
- driven by a consortium of operators, vendors, and academic researchers
- intended for rapid development of production-grade route planning tools
- easily extensible to include custom network elements
- performant to the scale of real-world mesh optical networks
EDFA
~~~~
Documentation: https://gnpy.readthedocs.io
Get In Touch
~~~~~~~~~~~~
There are `weekly calls <https://telecominfraproject.workplace.com/events/702894886867547/>`__ about our progress.
Newcomers, users and telecom operators are especially welcome there.
We encourage all interested people outside the TIP to `join the project <https://telecominfraproject.com/apply-for-membership/>`__.
Branches and Tagged Releases
----------------------------
- all releases are `available via GitHub <https://github.com/Telecominfraproject/oopt-gnpy/releases>`_
- the `master <https://github.com/Telecominfraproject/oopt-gnpy/tree/master>`_ branch contains stable, `validated code <https://github.com/Telecominfraproject/oopt-gnpy/wiki/Testing-for-Quality>`_. It is updated from develop on a release schedule determined by the OOPT-PSE Working Group.
- the `develop <https://github.com/Telecominfraproject/oopt-gnpy/tree/develop>`_ branch contains the latest code under active development, which may not be fully validated and tested.
How to Install
--------------
Using prebuilt Docker images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our `Docker images <https://hub.docker.com/r/telecominfraproject/oopt-gnpy>`_ contain everything needed to run all examples from this guide.
Docker transparently fetches the image over the network upon first use.
On Linux and Mac, run:
.. code-block:: shell-session
$ docker run -it --rm --volume $(pwd):/shared telecominfraproject/oopt-gnpy
root@bea050f186f7:/shared/examples#
On Windows, launch from Powershell as:
.. code-block:: powershell
PS C:\> docker run -it --rm --volume ${PWD}:/shared telecominfraproject/oopt-gnpy
root@89784e577d44:/shared/examples#
In both cases, a directory named ``examples/`` will appear in your current working directory.
GNPy automaticallly populates it with example files from the current release.
Remove that directory if you want to start from scratch.
Using Python on your computer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Note**: `gnpy` supports Python 3 only. Python 2 is not supported.
`gnpy` requires Python ≥3.6
**Note**: the `gnpy` maintainers strongly recommend the use of Anaconda for
managing dependencies.
It is recommended that you use a "virtual environment" when installing `gnpy`.
Do not install `gnpy` on your system Python.
We recommend the use of the `Anaconda Python distribution <https://www.anaconda.com/download>`_ which comes with many scientific computing
dependencies pre-installed. Anaconda creates a base "virtual environment" for
you automatically. You can also create and manage your ``conda`` "virtual
environments" yourself (see:
https://conda.io/docs/user-guide/tasks/manage-environments.html)
To activate your Anaconda virtual environment, you may need to do the
following:
.. code-block:: shell
$ source /path/to/anaconda/bin/activate # activate Anaconda base environment
(base) $ # note the change to the prompt
You can check which Anaconda environment you are using with:
.. code-block:: shell
(base) $ conda env list # list all environments
# conda environments:
#
base * /src/install/anaconda3
(base) $ echo $CONDA_DEFAULT_ENV # show default environment
base
You can check your version of Python with the following. If you are using
Anaconda's Python 3, you should see similar output as below. Your results may
be slightly different depending on your Anaconda installation path and the
exact version of Python you are using.
.. code-block:: shell
$ which python # check which Python executable is used
/path/to/anaconda/bin/python
$ python -V # check your Python version
Python 3.6.5 :: Anaconda, Inc.
From within your Anaconda Python 3 environment, you can clone the master branch
of the `gnpy` repo and install it with:
.. code-block:: shell
$ git clone https://github.com/Telecominfraproject/oopt-gnpy # clone the repo
$ cd oopt-gnpy
$ python setup.py install # install
To test that `gnpy` was successfully installed, you can run this command. If it
executes without a ``ModuleNotFoundError``, you have successfully installed
`gnpy`.
.. code-block:: shell
$ python -c 'import gnpy' # attempt to import gnpy
$ pytest # run tests
Instructions for First Use
--------------------------
``gnpy`` is a library for building route planning and optimization tools.
It ships with a number of example programs. Release versions will ship with
fully-functional programs.
**Note**: *If you are a network operator or involved in route planning and
optimization for your organization, please contact project maintainer Jan
Kundrát <jan.kundrat@telecominfraproject.com>. gnpy is looking for users with
specific, delineated use cases to drive requirements for future
development.*
This example demonstrates how GNPy can be used to check the expected SNR at the end of the line by varying the channel input power:
.. image:: https://telecominfraproject.github.io/oopt-gnpy/docs/images/transmission_main_example.svg
:width: 100%
:align: left
:alt: Running a simple simulation example
:target: https://asciinema.org/a/252295
By default, this script operates on a single span network defined in
`examples/edfa_example_network.json <examples/edfa_example_network.json>`_
You can specify a different network at the command line as follows. For
example, to use the CORONET Global network defined in
`examples/CORONET_Global_Topology.json <examples/CORONET_Global_Topology.json>`_:
.. code-block:: shell-session
$ ./examples/transmission_main_example.py examples/CORONET_Global_Topology.json
It is also possible to use an Excel file input (for example
`examples/CORONET_Global_Topology.xls <examples/CORONET_Global_Topology.xls>`_).
The Excel file will be processed into a JSON file with the same prefix. For
further instructions on how to prepare the Excel input file, see
`Excel_userguide.rst <Excel_userguide.rst>`_.
The main transmission example will calculate the average signal OSNR and SNR
across network elements (transceiver, ROADMs, fibers, and amplifiers)
between two transceivers selected by the user. Additional details are provided by doing ``transmission_main_example.py -h``. (By default, for the CORONET Global
network, it will show the transmission of spectral information between Abilene and Albany)
This script calculates the average signal OSNR = |OSNR| and SNR = |SNR|.
.. |OSNR| replace:: P\ :sub:`ch`\ /P\ :sub:`ase`
.. |SNR| replace:: P\ :sub:`ch`\ /(P\ :sub:`nli`\ +\ P\ :sub:`ase`)
|Pase| is the amplified spontaneous emission noise, and |Pnli| the non-linear
interference noise.
.. |Pase| replace:: P\ :sub:`ase`
.. |Pnli| replace:: P\ :sub:`nli`
Further Instructions for Use (`transmission_main_example.py`, `path_requests_run.py`)
-------------------------------------------------------------------------------------
Design and transmission parameters are defined in a dedicated json file. By
default, this information is read from `examples/eqpt_config.json
<examples/eqpt_config.json>`_. This file defines the equipment libraries that
can be customized (EDFAs, fibers, and transceivers).
It also defines the simulation parameters (spans, ROADMs, and the spectral
information to transmit.)
The EDFA equipment library is a list of supported amplifiers. New amplifiers
can be added and existing ones removed. Three different noise models are available:
1. ``'type_def': 'variable_gain'`` is a simplified model simulating a 2-coil EDFA with internal, input and output VOAs.
The NF vs gain response is calculated accordingly based on the input parameters: ``nf_min``, ``nf_max``, and ``gain_flatmax``.
It is not a simple interpolation but a 2-stage NF calculation.
2. ``'type_def': 'fixed_gain'`` is a fixed gain model.
`NF == Cte == nf0` if `gain_min < gain < gain_flatmax`
3. ``'type_def': 'openroadm'`` models the incremental OSNR contribution as a function of input power.
It is suitable for inline amplifiers that conform to the OpenROADM specification.
The input parameters are coefficients of the :ref:`third-degree polynomial<ext-nf-model-polynomial-OSNR-OpenROADM>`.
4. ``'type_def': 'openroadm_preamp'`` and ``openroadm_booster`` approximate the :ref:`preamp and booster within an OpenROADM network<ext-nf-model-noise-mask-OpenROADM>`.
No extra parameters specific to the NF model are accepted.
5. ``'type_def': 'advanced_model'`` is an advanced model.
A detailed JSON configuration file is required (by default `gnpy/example-data/std_medium_gain_advanced_config.json <https://github.com/Telecominfraproject/oopt-gnpy/blob/master/gnpy/example-data/std_medium_gain_advanced_config.json>`_).
It uses a 3rd order polynomial where NF = f(gain), NF_ripple = f(frequency), gain_ripple = f(frequency), N-array dgt = f(frequency).
Compared to the previous models, NF ripple and gain ripple are modelled.
1. ``'type_def': 'variable_gain'`` is a simplified model simulating a 2-coil EDFA with internal, input and output VOAs. The NF vs gain response is calculated accordingly based on the input parameters: ``nf_min``, ``nf_max``, and ``gain_flatmax``. It is not a simple interpolation but a 2-stage NF calculation.
2. ``'type_def': 'fixed_gain'`` is a fixed gain model. `NF == Cte == nf0` if `gain_min < gain < gain_flatmax`
3. ``'type_def': None`` is an advanced model. A detailed JSON configuration file is required (by default `examples/std_medium_gain_advanced_config.json <examples/std_medium_gain_advanced_config.json>`_). It uses a 3rd order polynomial where NF = f(gain), NF_ripple = f(frequency), gain_ripple = f(frequency), N-array dgt = f(frequency). Compared to the previous models, NF ripple and gain ripple are modelled.
For all amplifier models:
@@ -56,82 +227,24 @@ For all amplifier models:
| | | Excel template topology files.) |
+------------------------+-----------+-----------------------------------------+
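
As an aside to the EDFA library description above, here is a minimal Python sketch of what one ``Edfa`` entry could look like; the field names follow the noise-model text above, while the ``type_variety`` name and all numeric values are made up for illustration and are not taken from the shipped ``eqpt_config.json``:

```python
# Illustrative Edfa library entry -- field names from the text above, values invented.
import json

edfa_entry = {
    "type_variety": "example_medium_gain",  # unique name referenced by topology files
    "type_def": "variable_gain",            # one of the noise models listed above
    "gain_flatmax": 26,                     # dB
    "gain_min": 15,                         # dB
    "nf_min": 6,                            # dB, used by the variable_gain NF calculation
    "nf_max": 10,                           # dB
    "allowed_for_design": True,             # let auto-design consider this amplifier
}
print(json.dumps({"Edfa": [edfa_entry]}, indent=2))
```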
Fiber
~~~~~
The fiber library currently describes SSMF and NZDF but additional fiber types can be entered by the user following the same model:
+----------------------+-----------+------------------------------------------+
| field | type | description |
+======================+===========+==========================================+
| ``type_variety`` | (string) | a unique name to ID the fiber in the |
| | | JSON or Excel template topology input |
| | | file |
+----------------------+-----------+------------------------------------------+
| ``dispersion`` | (number) | In :math:`s \times m^{-1} \times m^{-1}`.|
+----------------------+-----------+------------------------------------------+
| ``dispersion_slope`` | (number) | In :math:`s \times m^{-1} \times m^{-1} |
| | | \times m^{-1}` |
+----------------------+-----------+------------------------------------------+
| ``effective_area`` | (number) | Effective area of the fiber (not just |
| | | the MFD circle). This is the |
| | | :math:`A_{eff}`, see e.g., the |
| | | `Corning whitepaper on MFD/EA`_. |
| | | Specified in :math:`m^{2}`. |
+----------------------+-----------+------------------------------------------+
| ``gamma`` | (number) | Coefficient :math:`\gamma = 2\pi\times |
| | | n^2/(\lambda*A_{eff})`. |
| | | If not provided, this will be derived |
| | | from the ``effective_area`` |
| | | :math:`A_{eff}`. |
| | | In :math:`w^{-1} \times m^{-1}`. |
+----------------------+-----------+------------------------------------------+
| ``pmd_coef`` | (number) | Polarization mode dispersion (PMD) |
| | | coefficient. In |
| | | :math:`s\times\sqrt{m}^{-1}`. |
+----------------------+-----------+------------------------------------------+
| ``lumped_losses`` | (array) | Places along the fiber length with extra |
| | | losses. Specified as a loss in dB at |
| | | each relevant position (in km): |
| | | ``{"position": 10, "loss": 1.5}``) |
+----------------------+-----------+------------------------------------------+
.. _Corning whitepaper on MFD/EA: https://www.corning.com/microsites/coc/oem/documents/specialty-fiber/WP7071-Mode-Field-Diam-and-Eff-Area.pdf
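
Similarly, a minimal Python sketch of a ``Fiber`` library entry; the field names follow the table above, and the SSMF-like values are rough illustrations rather than the shipped defaults:

```python
# Illustrative Fiber library entry -- field names from the table above, values approximate.
import json

fiber_entry = {
    "type_variety": "SSMF",     # name referenced from JSON/Excel topology files
    "dispersion": 1.67e-05,     # s/m/m
    "effective_area": 83e-12,   # m^2; gamma may be derived from this if not given
    "pmd_coef": 1.265e-15,      # s/sqrt(m)
}
print(json.dumps({"Fiber": [fiber_entry]}, indent=2))
```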
RamanFiber
~~~~~~~~~~
The RamanFiber can be used to simulate Raman amplification through dedicated Raman pumps. The Raman pumps must be listed
in the key ``raman_pumps`` within the RamanFiber ``operational`` dictionary. The description of each Raman pump must
contain the following:
+---------------------------+-----------+------------------------------------------------------------+
| field | type | description |
+===========================+===========+============================================================+
| ``power`` | (number) | Total pump power in :math:`W` |
| | | considering a depolarized pump |
+---------------------------+-----------+------------------------------------------------------------+
| ``frequency`` | (number) | Pump central frequency in :math:`Hz` |
+---------------------------+-----------+------------------------------------------------------------+
| ``propagation_direction`` | (number) | The pumps can propagate in the same or opposite direction |
| | | with respect the signal. Valid choices are ``coprop`` and |
| | | ``counterprop``, respectively |
+---------------------------+-----------+------------------------------------------------------------+
Beside the list of Raman pumps, the RamanFiber ``operational`` dictionary must include the ``temperature`` that affects
the amplified spontaneous emission noise generated by the Raman amplification.
As the loss coefficient significantly varies outside the C-band, where the Raman pumps are usually placed,
it is suggested to include an estimation of the loss coefficient for the Raman pump central frequencies within
a dictionary-like definition of the ``RamanFiber.params.loss_coef``
(e.g. ``loss_coef = {"value": [0.18, 0.18, 0.20, 0.20], "frequency": [191e12, 196e12, 200e12, 210e12]}``).
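
A minimal Python sketch of the ``operational`` dictionary described above; the keys come from the table and the surrounding text, the pump values are arbitrary, and the temperature unit (Kelvin) is an assumption since the text above does not state it:

```python
# Illustrative RamanFiber "operational" block -- keys from the description above,
# values invented; temperature assumed to be in Kelvin.
operational = {
    "temperature": 283,
    "raman_pumps": [
        {"power": 0.2,            # W, total power of a depolarized pump
         "frequency": 205e12,     # Hz
         "propagation_direction": "counterprop"},
        {"power": 0.15,
         "frequency": 201e12,
         "propagation_direction": "coprop"},
    ],
}
```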
Transceiver
~~~~~~~~~~~
+----------------------+-----------+-----------------------------------------+
| field | type | description |
+======================+===========+=========================================+
| ``type_variety`` | (string) | a unique name to ID the fiber in the |
| | | JSON or Excel template topology input |
| | | file |
+----------------------+-----------+-----------------------------------------+
| ``dispersion`` | (number) | (s.m-1.m-1) |
+----------------------+-----------+-----------------------------------------+
| ``gamma`` | (number) | 2pi.n2/(lambda*Aeff) (w-2.m-1) |
+----------------------+-----------+-----------------------------------------+
The transceiver equipment library is a list of supported transceivers. New
transceivers can be added and existing ones removed at will by the user. It is
used to determine the service list path feasibility when running the
``gnpy-path-request`` script.
`path_request_run.py routine <examples/path_request_run.py>`_.
+----------------------+-----------+-----------------------------------------+
| field | type | description |
@@ -140,7 +253,7 @@ used to determine the service list path feasibility when running the
| | | the JSON or Excel template topology |
| | | input file |
+----------------------+-----------+-----------------------------------------+
| ``frequency`` | (number) | Min/max central channel frequency. |
| ``frequency`` | (number) | Min/max as below. |
+----------------------+-----------+-----------------------------------------+
| ``mode`` | (number) | A list of modes supported by the |
| | | transponder. New modes can be added at |
@@ -162,123 +275,30 @@ The modes are defined as follows:
+----------------------+-----------+-----------------------------------------+
| ``bit_rate`` | (number) | in bit/s |
+----------------------+-----------+-----------------------------------------+
| ``roll_off`` | (number) | Pure number between 0 and 1. TX signal |
| | | roll-off shape. Used by Raman-aware |
| | | simulation code. |
| ``roll_off`` | (number) | Not used. |
+----------------------+-----------+-----------------------------------------+
| ``tx_osnr`` | (number) | In dB. OSNR out from transponder. |
+----------------------+-----------+-----------------------------------------+
| ``cost`` | (number) | Arbitrary unit |
+----------------------+-----------+-----------------------------------------+
ROADM
~~~~~
Simulation parameters are defined as follows.
The user can only modify the value of existing parameters:
Auto-design automatically creates EDFA amplifier network elements when they are
missing, after a fiber, or between a ROADM and a fiber. This auto-design
functionality can be manually and locally deactivated by introducing a ``Fused``
network element after a ``Fiber`` or a ``Roadm`` that doesn't need amplification.
The amplifier is chosen in the EDFA list of the equipment library based on
gain, power, and NF criteria. Only the EDFA that are marked
``'allowed_for_design': true`` are considered.
+--------------------------+-----------+---------------------------------------------+
| field | type | description |
+==========================+===========+=============================================+
| ``target_pch_out_db`` | (number) | Auto-design sets the ROADM egress channel |
| | | power. This reflects typical control loop |
| | | algorithms that adjust ROADM losses to |
| | | equalize channels (eg coming from different |
| | | ingress direction or add ports) |
| | | This is the default value |
| | | Roadm/params/target_pch_out_db if no value |
| | | is given in the ``Roadm`` element in the |
| | | topology input description. |
| | | This default value is ignored if a |
| | | params/target_pch_out_db value is input in |
| | | the topology for a given ROADM. |
+--------------------------+-----------+---------------------------------------------+
| ``add_drop_osnr`` | (number) | OSNR contribution from the add/drop ports |
+--------------------------+-----------+---------------------------------------------+
| ``pmd`` | (number) | Polarization mode dispersion (PMD). (s) |
+--------------------------+-----------+---------------------------------------------+
| ``restrictions`` | (dict of | If non-empty, keys ``preamp_variety_list`` |
| | strings) | and ``booster_variety_list`` represent |
| | | list of ``type_variety`` amplifiers which |
| | | are allowed for auto-design within ROADM's |
| | | line degrees. |
| | | |
| | | If no booster should be placed on a degree, |
| | | insert a ``Fused`` node on the degree |
| | | output. |
+--------------------------+-----------+---------------------------------------------+
For amplifiers defined in the topology JSON input but whose ``gain = 0``
(placeholder), auto-design will set its gain automatically: see ``power_mode`` in
the ``Spans`` library to find out how the gain is calculated.
Global parameters
-----------------
The following options are still defined in ``eqpt_config.json`` for legacy reasons, but
they do not correspond to tangible network devices.
Auto-design automatically creates EDFA amplifier network elements when they are missing, after a fiber, or between a ROADM and a fiber.
This auto-design functionality can be manually and locally deactivated by introducing a ``Fused`` network element after a ``Fiber`` or a ``Roadm`` that doesn't need amplification.
The amplifier is chosen in the EDFA list of the equipment library based on gain, power, and NF criteria.
Only the EDFA that are marked ``'allowed_for_design': true`` are considered.
For amplifiers defined in the topology JSON input but whose ``gain = 0`` (placeholder), auto-design will set its gain automatically: see ``power_mode`` in the ``Spans`` library to find out how the gain is calculated.
The file ``sim_params.json`` contains the tuning parameters used within both the ``gnpy.science_utils.RamanSolver`` and
the ``gnpy.science_utils.NliSolver`` for the evaluation of the Raman profile and the NLI generation, respectively.
+---------------------------------------------+-----------+---------------------------------------------+
| field | type | description |
+=============================================+===========+=============================================+
| ``raman_params.flag`` | (boolean) | Enable/Disable the Raman effect that |
| | | produces a power transfer from higher to |
| | | lower frequencies. |
| | | In general, considering the Raman effect |
| | | provides more accurate results. It is |
| | | mandatory when Raman amplification is |
| | | included in the simulation |
+---------------------------------------------+-----------+---------------------------------------------+
| ``raman_params.result_spatial_resolution`` | (number) | Spatial resolution of the output |
| | | Raman profile along the entire fiber span. |
| | | This affects the accuracy and the |
| | | computational time of the NLI |
| | | calculation when the GGN method is used: |
| | | smaller the spatial resolution higher both |
| | | the accuracy and the computational time. |
| | | In C-band simulations, with input power per |
| | | channel around 0 dBm, a suggested value of |
| | | spatial resolution is 10e3 m |
+---------------------------------------------+-----------+---------------------------------------------+
| ``raman_params.solver_spatial_resolution`` | (number) | Spatial step for the iterative solution |
| | | of the first order differential equation |
| | | used to calculate the Raman profile |
| | | along the entire fiber span. |
| | | This affects the accuracy and the |
| | | computational time of the evaluated |
| | | Raman profile: |
| | | smaller the spatial resolution higher both |
| | | the accuracy and the computational time. |
| | | In C-band simulations, with input power per |
| | | channel around 0 dBm, a suggested value of |
| | | spatial resolution is 100 m |
+---------------------------------------------+-----------+---------------------------------------------+
| ``nli_params.method`` | (string) | Model used for the NLI evaluation. Valid |
| | | choices are ``gn_model_analytic`` (see |
| | | eq. 120 from `arXiv:1209.0394 |
| | | <https://arxiv.org/abs/1209.0394>`_) and |
| | | ``ggn_spectrally_separated`` (see eq. 21 |
| | | from `arXiv:1710.02225 |
| | | <https://arxiv.org/abs/1710.02225>`_). |
+---------------------------------------------+-----------+---------------------------------------------+
| ``nli_params.computed_channels`` | (number) | The channels on which the NLI is |
| | | explicitly evaluated. |
| | | The NLI of the other channels is |
| | | interpolated using ``numpy.interp``. |
| | | In a C-band simulation with 96 channels in |
| | | a 50 GHz spacing fix-grid we recommend at |
| | | one computed channel every 20 channels. |
+---------------------------------------------+-----------+---------------------------------------------+
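
A minimal Python sketch of a ``sim_params.json``-style structure; the field names follow the table above, the numeric values just reuse the suggested orders of magnitude, and representing ``computed_channels`` as a list of channel indices is an assumption based on its description:

```python
# Illustrative sim_params structure -- fields from the table above, values indicative only.
sim_params = {
    "raman_params": {
        "flag": True,                       # enable the stimulated Raman power transfer
        "result_spatial_resolution": 10e3,  # m, resolution of the output Raman profile
        "solver_spatial_resolution": 100,   # m, integration step of the Raman solver
    },
    "nli_params": {
        "method": "ggn_spectrally_separated",      # or "gn_model_analytic"
        "computed_channels": [1, 25, 49, 73, 96],  # explicitly computed; others interpolated
    },
}
```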
Span
~~~~
Span configuration is not a list (which may change in later releases) and the user can only modify the value of existing parameters:
Span configuration is performed as follows. It is not a list (which may change
in later releases) and the user can only modify the value of existing
parameters:
+-------------------------------------+-----------+---------------------------------------------+
| field | type | description |
@@ -342,7 +362,7 @@ Span configuration is not a list (which may change in later releases) and the us
| | | Interest to support high level |
| | | topologies that do not specify in line |
| | | amplification sites. For example the |
| | | CORONET_Global_Topology.xlsx defines |
| | | CORONET_Global_Topology.xls defines |
| | | links > 1000km between 2 sites: it |
| | | couldn't be simulated if these links |
| | | were not split in shorter span lengths. |
@@ -392,6 +412,7 @@ Span configuration is not a list (which may change in later releases) and the us
"type_variety": "SSMF",
"params":
{
"type_variety": "SSMF",
"length": 120.0,
"loss_coef": 0.2,
"length_units": "km",
@@ -401,29 +422,55 @@ Span configuration is not a list (which may change in later releases) and the us
}
}
SpectralInformation
~~~~~~~~~~~~~~~~~~~
ROADMs can be configured as follows. The user can only modify the value of
existing parameters:
The user can only modify the value of existing parameters.
It defines a spectrum of N identical carriers.
While the code libraries allow for different carriers and power levels, the current user parametrization only allows one carrier type and one power/channel definition.
+--------------------------+-----------+---------------------------------------------+
| field | type | description |
+==========================+===========+=============================================+
| ``target_pch_out_db`` | (number) | Auto-design sets the ROADM egress channel |
| | | power. This reflects typical control loop |
| | | algorithms that adjust ROADM losses to |
| | | equalize channels (e.g., coming from |
| | | different ingress directions or add ports). |
| | | This is the default value for |
| | | Roadm/params/target_pch_out_db if no value |
| | | is given in the ``Roadm`` element in the |
| | | topology input description. |
| | | This default value is ignored if a |
| | | params/target_pch_out_db value is input in |
| | | the topology for a given ROADM. |
+--------------------------+-----------+---------------------------------------------+
| ``add_drop_osnr`` | (number) | OSNR contribution from the add/drop ports |
+--------------------------+-----------+---------------------------------------------+
| ``restrictions`` | (dict of | If non-empty, keys ``preamp_variety_list`` |
| | strings) | and ``booster_variety_list`` represent |
| | | lists of ``type_variety`` amplifiers which |
| | | are allowed for auto-design within the |
| | | ROADM's line degrees. |
| | | |
| | | If no booster should be placed on a degree, |
| | | insert a ``Fused`` node on the degree |
| | | output. |
+--------------------------+-----------+---------------------------------------------+
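As a sketch, the corresponding entry in ``eqpt_config.json`` looks like the
following (the values shown here are illustrative only):

.. code-block:: json

    {
        "Roadm": [{
            "target_pch_out_db": -20,
            "add_drop_osnr": 38,
            "restrictions": {
                "preamp_variety_list": [],
                "booster_variety_list": []
            }
        }]
    }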
SpectralInformation
~~~~~~~~~~~~~~~~~~~
The ``SpectralInformation`` object can be configured as follows. The user can
only modify the value of existing parameters. It defines a spectrum of N
identical carriers. While the code libraries allow for different carriers and
power levels, the current user parametrization only allows one carrier type and
one power/channel definition.
+----------------------+-----------+-------------------------------------------+
| field | type | description |
+======================+===========+===========================================+
| ``f_min``, | (number) | In Hz. Define spectrum boundaries. Note |
| ``f_max`` | | that due to backward compatibility, the |
| | | first channel central frequency is placed |
| | | at :math:`f_{min} + spacing` and the last |
| | | one at :math:`f_{max}`. |
+----------------------+-----------+-------------------------------------------+
| ``baud_rate`` | (number) | In Hz. Simulated baud rate. |
+----------------------+-----------+-------------------------------------------+
| ``spacing`` | (number) | In Hz. Carrier spacing. |
+----------------------+-----------+-------------------------------------------+
| ``roll_off`` | (number) | Pure number between 0 and 1. TX signal |
| | | roll-off shape. Used by Raman-aware |
| | | simulation code. |
+----------------------+-----------+-------------------------------------------+
| ``tx_osnr`` | (number) | In dB. OSNR out from transponder. |
+----------------------+-----------+-------------------------------------------+
@@ -446,5 +493,141 @@ While the code libraries allow for different carriers and power levels, the curr
| | | power_range_db + power_dbm. |
+----------------------+-----------+-------------------------------------------+
| ``sys_margins`` | (number) | In dB. Added margin on min required |
| | | transceiver OSNR. |
+----------------------+-----------+-------------------------------------------+
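A sketch of the corresponding ``SI`` block in ``eqpt_config.json`` (the values
are illustrative only):

.. code-block:: json

    {
        "SI": [{
            "f_min": 191.35e12,
            "f_max": 196.1e12,
            "baud_rate": 32e9,
            "spacing": 50e9,
            "roll_off": 0.15,
            "tx_osnr": 40,
            "power_dbm": 0,
            "power_range_db": [0, 0, 1],
            "sys_margins": 2
        }]
    }

With these particular values the grid holds 95 channels: the first carrier sits
at ``f_min + spacing`` (191.40 THz) and the last one at ``f_max`` (196.1 THz).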
The `transmission_main_example.py <examples/transmission_main_example.py>`_ script propagates a spectrum of channels at 32 Gbaud, 50 GHz spacing and 0 dBm/channel.
Launch power can be overridden by using the ``--power`` argument.
Spectrum information is not yet parametrized but can be modified directly in the ``eqpt_config.json`` (via the ``SpectralInformation`` -SI- structure) to accommodate any baud rate or spacing.
The number of channels is computed based on ``spacing`` and ``f_min``, ``f_max`` values.
An experimental support for Raman amplification is available:
.. code-block:: shell
$ ./examples/transmission_main_example.py \
examples/raman_edfa_example_network.json \
--sim examples/sim_params.json --show-channels
Configuration of Raman pumps (their frequencies, power and pumping direction) is done via the `RamanFiber element in the network topology <examples/raman_edfa_example_network.json>`_.
General numeric parameters for simulation control are provided in the `examples/sim_params.json <examples/sim_params.json>`_.
Use `examples/path_requests_run.py <examples/path_requests_run.py>`_ to run multiple optimizations as follows:
.. code-block:: shell
$ python path_requests_run.py -h
Usage: path_requests_run.py [-h] [-v] [-o OUTPUT] [network_filename] [service_filename] [eqpt_filename]
The ``network_filename`` and ``service_filename`` can be an XLS or JSON file. The ``eqpt_filename`` must be a JSON file.
To see an example of it, run:
.. code-block:: shell
$ cd examples
$ python path_requests_run.py meshTopologyExampleV2.xls meshTopologyExampleV2_services.json eqpt_config.json -o output_file.json
This program requires a list of connections to be estimated and the equipment
library. The program computes performances for the list of services (accepts
JSON or Excel format) using the same spectrum propagation modules as
``transmission_main_example.py``. An explanation of the Excel template is
provided in `Excel_userguide.rst <Excel_userguide.rst#service-sheet>`_. A
template for the JSON format can be found here: `service-template.json
<service-template.json>`_.
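For orientation, a single request in such a service file looks roughly like the
sketch below. The field names mirror the bundled example files, with made-up
endpoint names; ``dst-tp-id`` is assumed here by symmetry with ``src-tp-id``,
and the full template also carries the transponder type and mode, spacing,
requested bandwidth and routing constraints, so refer to `service-template.json
<service-template.json>`_ for the authoritative structure.

.. code-block:: json

    {
        "path-request": [
            {
                "request-id": "first",
                "source": "trx-Amsterdam",
                "destination": "trx-Berlin",
                "src-tp-id": "trx-Amsterdam",
                "dst-tp-id": "trx-Berlin"
            }
        ]
    }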
Contributing
------------
``gnpy`` is looking for additional contributors, especially those with experience
planning and maintaining large-scale, real-world mesh optical networks.
To get involved, please contact Jan Kundrát
<jan.kundrat@telecominfraproject.com> or Gert Grammel <ggrammel@juniper.net>.
``gnpy`` contributions are currently limited to members of `TIP
<http://telecominfraproject.com>`_. Membership is free and open to all.
See the `Onboarding Guide
<https://github.com/Telecominfraproject/gnpy/wiki/Onboarding-Guide>`_ for
specific details on code contributions.
See `AUTHORS.rst <AUTHORS.rst>`_ for past and present contributors.
Project Background
------------------
Data Centers are built upon interchangeable, highly standardized node and
network architectures rather than a sum of isolated solutions. This also
translates to optical networking. It leads to a push to enable multi-vendor
optical networks by disaggregating HW and SW functions and focusing on
interoperability. In this paradigm, the burden of responsibility for ensuring
the performance of such disaggregated open optical systems falls on the
operators. Consequently, operators and vendors are collaborating in defining
control models that can be readily used by off-the-shelf controllers. However,
node and network models are only part of the answer. To take reasonable
decisions, controllers need to incorporate logic to simulate and assess optical
performance. Hence, a vendor-independent optical quality estimator is required.
Given its vendor-agnostic nature, such an estimator needs to be driven by a
consortium of operators, system and component suppliers.
Founded in February 2016, the Telecom Infra Project (TIP) is an
engineering-focused initiative which is operator driven, but features
collaboration across operators, suppliers, developers, integrators, and
startups with the goal of disaggregating the traditional network deployment
approach. The group's ultimate goal is to help provide better connectivity for
communities all over the world as more people come on-line and demand more
bandwidth-intensive experiences like video, virtual reality and augmented
reality.
Within TIP, the Open Optical Packet Transport (OOPT) project group is chartered
with unbundling monolithic packet-optical network technologies in order to
unlock innovation and support new, more flexible connectivity paradigms.
The key to unbundling is the ability to accurately plan and predict the
performance of optical line systems based on an accurate simulation of optical
parameters. Under that OOPT umbrella, the Physical Simulation Environment (PSE)
working group set out to disrupt the planning landscape by providing an open
source simulation model which can be used freely across multiple vendor
implementations.
.. |docs| image:: https://readthedocs.org/projects/gnpy/badge/?version=develop
:target: http://gnpy.readthedocs.io/en/develop/?badge=develop
:alt: Documentation Status
:scale: 100%
.. |build| image:: https://travis-ci.com/Telecominfraproject/oopt-gnpy.svg?branch=develop
:target: https://travis-ci.com/Telecominfraproject/oopt-gnpy
:alt: Build Status
:scale: 100%
.. |doi| image:: https://zenodo.org/badge/96894149.svg
:target: https://zenodo.org/badge/latestdoi/96894149
:alt: DOI
:scale: 100%
TIP OOPT/PSE & PSE WG Charter
-----------------------------
We believe that openly sharing ideas, specifications, and other intellectual
property is the key to maximizing innovation and reducing complexity.
TIP OOPT/PSE's goal is to build an end-to-end simulation environment which
defines the network models of the optical device transfer functions and their
parameters. This environment will provide validation of the optical
performance requirements for the TIP OLS building blocks.
- The model may be approximate or complete depending on the network complexity.
Each model shall be validated against the proposed network scenario.
- The environment must be able to process network models from multiple vendors,
and also allow users to pick any implementation in an open source framework.
- The PSE will influence and benefit from the innovation of the DTC, API, and
OLS working groups.
- The PSE represents a step along the journey towards multi-layer optimization.
License
-------
``gnpy`` is distributed under a standard BSD 3-Clause License.
See `LICENSE <LICENSE>`__ for more details.

View File

@@ -1 +0,0 @@
graphviz

View File

@@ -1,59 +0,0 @@
(about-gnpy)=
# About the project
GNPy is a sponsored project of the [OOPT/PSE](https://telecominfraproject.com/open-optical-packet-transport/) working group of the [Telecom Infra Project](http://telecominfraproject.com).
There are weekly calls about our progress.
Newcomers, users and telecom operators are especially welcome there.
We encourage all interested people outside the TIP to [join the project](https://telecominfraproject.com/apply-for-membership/) and especially to [get in touch with us](https://github.com/Telecominfraproject/oopt-gnpy/discussions).
(contributing)=
## Contributing
`gnpy` is looking for additional contributors, especially those with experience planning and maintaining large-scale, real-world mesh optical networks.
To get involved, please contact [Jan Kundrát](mailto:jan.kundrat@telecominfraproject.com) or [Gert Grammel](mailto:ggrammel@juniper.net).
`gnpy` contributions are currently limited to members of [TIP](http://telecominfraproject.com).
Membership is free and open to all.
See the [Onboarding Guide](https://github.com/Telecominfraproject/gnpy/wiki/Onboarding-Guide) for specific details on code contributions, or just [upload patches to our Gerrit](https://review.gerrithub.io/Documentation/intro-gerrit-walkthrough-github.html).
Here is [what we are currently working on](https://review.gerrithub.io/q/project:Telecominfraproject/oopt-gnpy+status:open).
## Project Background
Data Centers are built upon interchangeable, highly standardized node and network architectures rather than a sum of isolated solutions.
This also translates to optical networking.
It leads to a push to enable multi-vendor optical networks by disaggregating HW and SW functions and focusing on interoperability.
In this paradigm, the burden of responsibility for ensuring the performance of such disaggregated open optical systems falls on the operators.
Consequently, operators and vendors are collaborating in defining control models that can be readily used by off-the-shelf controllers.
However, node and network models are only part of the answer.
To take reasonable decisions, controllers need to incorporate logic to simulate and assess optical performance.
Hence, a vendor-independent optical quality estimator is required.
Given its vendor-agnostic nature, such an estimator needs to be driven by a consortium of operators, system and component suppliers.
Founded in February 2016, the Telecom Infra Project (TIP) is an engineering-focused initiative which is operator driven, but features collaboration across operators, suppliers, developers, integrators, and startups with the goal of disaggregating the traditional network deployment approach.
The group's ultimate goal is to help provide better connectivity for communities all over the world as more people come on-line and demand more bandwidth-intensive experiences like video, virtual reality and augmented reality.
Within TIP, the Open Optical Packet Transport (OOPT) project group is chartered with unbundling monolithic packet-optical network technologies in order to unlock innovation and support new, more flexible connectivity paradigms.
The key to unbundling is the ability to accurately plan and predict the performance of optical line systems based on an accurate simulation of optical parameters.
Under that OOPT umbrella, the Physical Simulation Environment (PSE) working group set out to disrupt the planning landscape by providing an open source simulation model which can be used freely across multiple vendor implementations.
## TIP OOPT/PSE & PSE WG Charter
We believe that openly sharing ideas, specifications, and other intellectual property is the key to maximizing innovation and reducing complexity.
TIP OOPT/PSE's goal is to build an end-to-end simulation environment which defines the network models of the optical device transfer functions and their parameters.
This environment will provide validation of the optical performance requirements for the TIP OLS building blocks.
- The model may be approximate or complete depending on the network complexity.
Each model shall be validated against the proposed network scenario.
- The environment must be able to process network models from multiple vendors, and also allow users to pick any implementation in an open source framework.
- The PSE will influence and benefit from the innovation of the DTC, API, and OLS working groups.
- The PSE represents a step along the journey towards multi-layer optimization.
License
-------
GNPy is distributed under a standard BSD 3-Clause License.

View File

@@ -1,270 +0,0 @@
.. _concepts:
Simulating networks with GNPy
=============================
Running simulations with GNPy requires three pieces of information:
- the :ref:`network topology<concepts-topology>`, which describes how the network looks like, what are the fiber lengths, what amplifiers are used, etc.,
- the :ref:`equipment library<concepts-equipment>`, which holds machine-readable datasheets of the equipment used in the network,
- the :ref:`simulation options<concepts-simulation>` holding instructions about what to simulate, and under which conditions.
.. _concepts-topology:
Network Topology
----------------
The *topology* acts as a "digital self" of the simulated network.
When given a network topology, GNPy can either run a specific simulation as-is, or it can *optimize* the topology before performing the simulation.
A network topology for GNPy is often a generic, mesh network.
This enables GNPy to take into consideration the current spectrum allocation as well as availability and resiliency considerations.
When the time comes to run a particular *propagation* of a signal and compute its impairments, though, a linear path through the network is used.
For this purpose, the *path* through the network refers to an ordered, acyclic sequence of *nodes* that are processed.
This path is directional, and all "GNPy elements" along the path match the unidirectional part of a real-world network equipment.
.. note::
In practical terms, an amplifier in GNPy refers to an entity with a single input port and a single output port.
A real-world inline EDFA enclosed in a single chassis will be therefore represented as two GNPy-level amplifiers.
The network topology contains not just the physical topology of the network, but also references to the :ref:`equipment library<concepts-equipment>` and a set of *operating parameters* for each entity.
These parameters include the **fiber length** of each fiber, the connector **attenuation losses**, or an amplifier's specific **gain setting**.
The topology is specified via :ref:`XLS files<excel>` or via :ref:`JSON<json>`.
.. _complete-vs-incomplete:
Fully Specified vs. Partially Designed Networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Let's consider a simple triangle topology with three :abbr:`PoPs (Points of Presence)` covering three cities:
.. graphviz::
:layout: neato
:align: center
graph "High-level topology with three PoPs" {
A -- B
B -- C
C -- A
}
In the real world, each city would probably host a ROADM and some transponders:
.. graphviz::
:layout: neato
:align: center
graph "Simplified topology with transponders" {
"ROADM A" [pos="2,2!"]
"ROADM B" [pos="4,2!"]
"ROADM C" [pos="3,1!"]
"Transponder A" [shape=box, pos="0,2!"]
"Transponder B" [shape=box, pos="6,2!"]
"Transponder C" [shape=box, pos="3,0!"]
"ROADM A" -- "ROADM B"
"ROADM B" -- "ROADM C"
"ROADM C" -- "ROADM A"
"Transponder A" -- "ROADM A"
"Transponder B" -- "ROADM B"
"Transponder C" -- "ROADM C"
}
GNPy simulation works by propagating the optical signal over a sequence of elements, which means that one has to add some preamplifiers and boosters.
The amplifiers are, by definition, unidirectional, so the graph becomes quite complex:
.. _topo-roadm-preamp-booster:
.. graphviz::
:layout: neato
:align: center
digraph "Preamps and boosters are explicitly modeled in GNPy" {
"ROADM A" [pos="2,4!"]
"ROADM B" [pos="6,4!"]
"ROADM C" [pos="4,0!"]
"Transponder A" [shape=box, pos="1,5!"]
"Transponder B" [shape=box, pos="7,5!"]
"Transponder C" [shape=box, pos="4,-1!"]
"Transponder A" -> "ROADM A"
"Transponder B" -> "ROADM B"
"Transponder C" -> "ROADM C"
"ROADM A" -> "Transponder A"
"ROADM B" -> "Transponder B"
"ROADM C" -> "Transponder C"
"Booster A C" [shape=triangle, orientation=-150, fixedsize=true, width=0.5, height=0.5, pos="2.2,3.2!", color=red, label=""]
"Preamp A C" [shape=triangle, orientation=0, fixedsize=true, width=0.5, height=0.5, pos="1.5,3.0!", color=red, label=""]
"ROADM A" -> "Booster A C"
"Preamp A C" -> "ROADM A"
"Booster A B" [shape=triangle, orientation=-90, fixedsize=true, width=0.5, height=0.5, pos="3,4.3!", color=red, fontcolor=red, labelloc=b, label="\N\n\n"]
"Preamp A B" [shape=triangle, orientation=90, fixedsize=true, width=0.5, height=0.5, pos="3,3.6!", color=red, fontcolor=red, labelloc=t, label="\n \N"]
"ROADM A" -> "Booster A B"
"Preamp A B" -> "ROADM A"
"Booster C B" [shape=triangle, orientation=-30, fixedsize=true, width=0.5, height=0.5, pos="4.7,0.9!", color=red, label=""]
"Preamp C B" [shape=triangle, orientation=120, fixedsize=true, width=0.5, height=0.5, pos="5.4,0.7!", color=red, label=""]
"ROADM C" -> "Booster C B"
"Preamp C B" -> "ROADM C"
"Booster C A" [shape=triangle, orientation=30, fixedsize=true, width=0.5, height=0.5, pos="2.6,0.7!", color=red, label=""]
"Preamp C A" [shape=triangle, orientation=-30, fixedsize=true, width=0.5, height=0.5, pos="3.3,0.9!", color=red, label=""]
"ROADM C" -> "Booster C A"
"Preamp C A" -> "ROADM C"
"Booster B A" [shape=triangle, orientation=90, fixedsize=true, width=0.5, height=0.5, pos="5,3.6!", labelloc=t, color=red, fontcolor=red, label="\n\N "]
"Preamp B A" [shape=triangle, orientation=-90, fixedsize=true, width=0.5, height=0.5, pos="5,4.3!", labelloc=b, color=red, fontcolor=red, label="\N\n\n"]
"ROADM B" -> "Booster B A"
"Preamp B A" -> "ROADM B"
"Booster B C" [shape=triangle, orientation=-180, fixedsize=true, width=0.5, height=0.5, pos="6.5,3.0!", color=red, label=""]
"Preamp B C" [shape=triangle, orientation=-20, fixedsize=true, width=0.5, height=0.5, pos="5.8,3.2!", color=red, label=""]
"ROADM B" -> "Booster B C"
"Preamp B C" -> "ROADM B"
"Booster A C" -> "Preamp C A"
"Booster A B" -> "Preamp B A"
"Booster C A" -> "Preamp A C"
"Booster C B" -> "Preamp B C"
"Booster B C" -> "Preamp C B"
"Booster B A" -> "Preamp A B"
}
In many regions, the ROADMs are not placed physically close to each other, so the long-haul fiber links (:abbr:`OMS (Optical Multiplex Section)`) are split into individual spans (:abbr:`OTS (Optical Transport Section)`) by in-line amplifiers, resulting in an even more complicated topology graph:
.. graphviz::
:layout: neato
:align: center
digraph "A subset of a real topology with inline amplifiers" {
"ROADM A" [pos="2,4!"]
"ROADM B" [pos="6,4!"]
"ROADM C" [pos="4,-3!"]
"Transponder A" [shape=box, pos="1,5!"]
"Transponder B" [shape=box, pos="7,5!"]
"Transponder C" [shape=box, pos="4,-4!"]
"Transponder A" -> "ROADM A"
"Transponder B" -> "ROADM B"
"Transponder C" -> "ROADM C"
"ROADM A" -> "Transponder A"
"ROADM B" -> "Transponder B"
"ROADM C" -> "Transponder C"
"Booster A C" [shape=triangle, orientation=-166, fixedsize=true, width=0.5, height=0.5, pos="2.2,3.2!", label=""]
"Preamp A C" [shape=triangle, orientation=0, fixedsize=true, width=0.5, height=0.5, pos="1.5,3.0!", label=""]
"ROADM A" -> "Booster A C"
"Preamp A C" -> "ROADM A"
"Booster A B" [shape=triangle, orientation=-90, fixedsize=true, width=0.5, height=0.5, pos="3,4.3!", label=""]
"Preamp A B" [shape=triangle, orientation=90, fixedsize=true, width=0.5, height=0.5, pos="3,3.6!", label=""]
"ROADM A" -> "Booster A B"
"Preamp A B" -> "ROADM A"
"Booster C B" [shape=triangle, orientation=-30, fixedsize=true, width=0.5, height=0.5, pos="4.7,-2.1!", label=""]
"Preamp C B" [shape=triangle, orientation=10, fixedsize=true, width=0.5, height=0.5, pos="5.4,-2.3!", label=""]
"ROADM C" -> "Booster C B"
"Preamp C B" -> "ROADM C"
"Booster C A" [shape=triangle, orientation=20, fixedsize=true, width=0.5, height=0.5, pos="2.6,-2.3!", label=""]
"Preamp C A" [shape=triangle, orientation=-30, fixedsize=true, width=0.5, height=0.5, pos="3.3,-2.1!", label=""]
"ROADM C" -> "Booster C A"
"Preamp C A" -> "ROADM C"
"Booster B A" [shape=triangle, orientation=90, fixedsize=true, width=0.5, height=0.5, pos="5,3.6!", label=""]
"Preamp B A" [shape=triangle, orientation=-90, fixedsize=true, width=0.5, height=0.5, pos="5,4.3!", label=""]
"ROADM B" -> "Booster B A"
"Preamp B A" -> "ROADM B"
"Booster B C" [shape=triangle, orientation=-180, fixedsize=true, width=0.5, height=0.5, pos="6.5,3.0!", label=""]
"Preamp B C" [shape=triangle, orientation=-20, fixedsize=true, width=0.5, height=0.5, pos="5.8,3.2!", label=""]
"ROADM B" -> "Booster B C"
"Preamp B C" -> "ROADM B"
"Inline A C 1" [shape=triangle, orientation=-166, fixedsize=true, width=0.5, pos="2.4,2.2!", label=" \N", color=red, fontcolor=red]
"Inline A C 2" [shape=triangle, orientation=-166, fixedsize=true, width=0.5, pos="2.6,1.2!", label=" \N", color=red, fontcolor=red]
"Inline A C 3" [shape=triangle, orientation=-166, fixedsize=true, width=0.5, pos="2.8,0.2!", label=" \N", color=red, fontcolor=red]
"Inline A C n" [shape=triangle, orientation=-166, fixedsize=true, width=0.5, pos="3.0,-1.1!", label=" \N", color=red, fontcolor=red]
"Booster A C" -> "Inline A C 1"
"Inline A C 1" -> "Inline A C 2"
"Inline A C 2" -> "Inline A C 3"
"Inline A C 3" -> "Inline A C n" [style=dotted]
"Inline A C n" -> "Preamp C A"
"Booster A B" -> "Preamp B A" [style=dotted]
"Booster C A" -> "Preamp A C" [style=dotted]
"Booster C B" -> "Preamp B C" [style=dotted]
"Booster B C" -> "Preamp C B" [style=dotted]
"Booster B A" -> "Preamp A B" [style=dotted]
}
In such networks, GNPy's autodesign features become very useful.
It is possible to connect ROADMs via "tentative links" which will be replaced by a sequence of actual fibers and specific amplifiers.
In other cases where the location of amplifier huts is already known, but the specific EDFA models have not yet been decided, one can put in amplifier placeholders and let GNPy assign the best amplifier.
.. _concepts-equipment:
The Equipment Library
---------------------
In order to produce an accurate simulation, GNPy needs to know the physical properties of each entity which affects the optical signal.
Entries in the equipment library correspond to actual real-world, tangible entities.
Unlike a typical :abbr:`NMS (Network Management System)`, GNPy considers not just the active :abbr:`NEs (Network Elements)` such as amplifiers and :abbr:`ROADMs (Reconfigurable Optical Add/Drop Multiplexers)`, but also the passive ones, such as the optical fiber.
As the signal propagates through the network, the largest source of optical impairments is the noise introduced from amplifiers.
An accurate description of the :abbr:`EDFA (Erbium-Doped Fiber Amplifier)` and especially its noise characteristics is required.
GNPy describes this property in terms of the **Noise Figure (NF)** of an amplifier model as a function of its operating point.
The amplifiers compensate power losses induced on the signal in the optical fiber.
The linear losses, however, are just one phenomenon of a multitude of effects that affect the signals in a long fiber run.
While a more detailed description is available :ref:`in the literature<physical-model>`, for the purpose of the equipment library, the description of the *optical fiber* comprises its **linear attenuation coefficient**, a set of parameters for the **Raman effect**, optical **dispersion**, etc.
Signals are introduced into the network via *transponders*.
The required set of parameters describes the physical properties of each supported *mode* of the transponder, including its **symbol rate**, spectral **width**, etc.
In the junctions of the network, *ROADMs* are used for spectrum routing.
GNPy currently does not take into consideration the spectrum filtering penalties of the :abbr:`WSSes (Wavelength Selective Switches)`, but the equipment library nonetheless contains a list of required parameters, such as the attenuation options, so that the network can be properly simulated.
.. _concepts-nf-model:
Amplifier Noise Figure Models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One of the key parameters of an amplifier is the method to use for computing the Noise Figure (NF).
GNPy supports several different noise models with varying level of accuracy.
When in doubt, contact your vendor's technical support and ask them to :ref:`contribute their equipment descriptions<extending-edfa>` to GNPy.
The most accurate noise models describe the resulting NF of an EDFA as a third-degree polynomial.
GNPy understands polynomials as an NF-yielding function of the :ref:`gain difference from the optimal gain<ext-nf-model-polynomial-NF>`, or as a function of the input power resulting in an incremental OSNR as used in :ref:`OpenROADM inline amplifiers<ext-nf-model-polynomial-OSNR-OpenROADM>` and :ref:`OpenROADM booster/preamps in the ROADMs<ext-nf-model-noise-mask-OpenROADM>`.
For scenarios where the vendor has not yet contributed an accurate EDFA NF description to GNPy, it is possible to approximate the characteristics via an operator-focused, min-max NF model.
.. _nf-model-min-max-NF:
Min-max NF
**********
This is an operator-focused model where performance is defined by the *minimal* and *maximal NF*.
These are especially suited to model a dual-coil EDFA with a VOA in between.
In these amplifiers, the minimal NF is achieved when the EDFA operates at its maximal (and usually optimal, in terms of flatness) gain.
The worst (maximal) NF applies when the EDFA operates at its minimal gain.
This model is suitable for use when the vendor has not provided a more accurate performance description of the EDFA.
Raman Approximation
*******************
While GNPy is fully Raman-aware, under certain scenarios it is useful to be able to run a simulation without an accurate Raman description.
For these purposes the :ref:`polynomial NF<ext-nf-model-polynomial-NF>` model with :math:`\text{a} = \text{b} = \text{c} = 0`, and :math:`\text{d} = NF` can be used.
.. _concepts-simulation:
Simulation
----------
When the network model has been instantiated and the physical properties and operational settings of the actual physical devices are known, GNPy can start simulating how the signal propagates through the optical fiber.
This set of input parameters includes options such as the *spectrum allocation*, i.e., the number of channels and their spacing.
Various strategies for network optimization can be provided as well.

View File

@@ -31,17 +31,8 @@ sys.path.insert(0, os.path.abspath('../'))
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.mathjax',
'sphinx.ext.githubpages',
'sphinxcontrib.bibtex',
'sphinx.ext.graphviz',
'myst_parser',
]
myst_enable_extensions = [
"deflist",
"dollarmath",
]
'sphinx.ext.mathjax',
'sphinx.ext.githubpages','sphinxcontrib.bibtex']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -57,8 +48,17 @@ master_doc = 'index'
# General information about the project.
project = 'gnpy'
copyright = '2018 - 2021, Telecom Infra Project - OOPT PSE Group'
author = 'Telecom Infra Project - OOPT PSE Group'
copyright = '2018, Telecom InfraProject - OOPT PSE Group'
author = 'Telecom InfraProject - OOPT PSE Group'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -87,17 +87,8 @@ todo_include_todos = False
on_rtd = os.environ.get('READTHEDOCS') == 'True'
if on_rtd:
html_theme = 'default'
html_theme_options = {
'logo_only': True,
}
else:
html_theme = 'alabaster'
html_theme_options = {
'logo': 'images/GNPy-logo.png',
'logo_name': False,
}
html_logo = 'images/GNPy-logo.png'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -108,7 +99,7 @@ html_logo = 'images/GNPy-logo.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
@@ -157,7 +148,7 @@ latex_elements = {
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'gnpy.tex', 'gnpy Documentation',
'Telecom Infra Project - OOPT PSE Group', 'manual'),
'Telecom InfraProject - OOPT PSE Group', 'manual'),
]
@@ -182,13 +173,4 @@ texinfo_documents = [
'Miscellaneous'),
]
autodoc_default_options = {
'members': True,
'undoc-members': True,
'private-members': True,
'show-inheritance': True,
}
graphviz_output_format = 'svg'
bibtex_bibfiles = ['biblio.bib']
autodoc_default_flags = ['members', 'undoc-members', 'private-members', 'show-inheritance']

View File

@@ -1,176 +0,0 @@
.. _extending:
Extending GNPy with vendor-specific data
========================================
GNPy ships with an :ref:`equipment library<concepts-equipment>` containing machine-readable datasheets of networking equipment.
Vendors who are willing to contribute descriptions of their supported products are encouraged to `submit a patch <https://review.gerrithub.io/Documentation/intro-gerrit-walkthrough-github.html>`__ -- or just :ref:`get in touch with us directly<contributing>`.
This chapter discusses options for modeling the performance of :ref:`EDFA amplifiers<extending-edfa>`, :ref:`Raman amplifiers<extending-raman>`, :ref:`transponders<extending-transponder>` and :ref:`ROADMs<extending-roadm>`.
.. _extending-edfa:
EDFAs
-----
An accurate description of the :abbr:`EDFA (Erbium-Doped Fiber Amplifier)` and especially its noise characteristics is required.
GNPy describes this property in terms of the **Noise Figure (NF)** of an amplifier model as a function of its operating point.
GNPy supports several different :ref:`noise models<concepts-nf-model>`, and vendors are encouraged to pick one which describes performance of their equipment most accurately.
.. _ext-nf-model-polynomial-NF:
Polynomial NF
*************
This model computes the NF as a function of the difference between the optimal gain and the current gain.
The NF is expressed as a third-degree polynomial:
.. math::
f(x) &= \text{a}x^3 + \text{b}x^2 + \text{c}x + \text{d}
\text{NF} &= f(G - G_\text{max})
This model can be also used for fixed-gain fixed-NF amplifiers.
In that case, use:
.. math::
a = b = c &= 0
d &= \text{NF}
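An amplifier with a constant NF can also be declared in the equipment library
with the ``fixed_gain`` type and a single ``nf0`` value rather than with
explicit polynomial coefficients. A sketch, mirroring the example
``eqpt_config.json`` shown later in this changeset:

.. code-block:: json

    {
        "type_variety": "fixed27",
        "type_def": "fixed_gain",
        "gain_flatmax": 27,
        "gain_min": 27,
        "p_max": 21,
        "nf0": 5.5,
        "allowed_for_design": false
    }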
.. _ext-nf-model-polynomial-OSNR-OpenROADM:
Polynomial OSNR (OpenROADM-style for inline amplifier)
******************************************************
This model is useful for amplifiers compliant to the OpenROADM specification for ILA (an in-line amplifier).
The amplifier performance is evaluated via its incremental OSNR, which is a function of the input power.
.. math::
\text{OSNR}_\text{inc}(P_\text{in}) = \text{a}P_\text{in}^3 + \text{b}P_\text{in}^2 + \text{c}P_\text{in} + \text{d}
.. _ext-nf-model-noise-mask-OpenROADM:
Noise mask (OpenROADM-style for combined preamp and booster)
************************************************************
Unlike GNPy, which simulates the preamplifier and the booster separately as two amplifiers for best accuracy, the OpenROADM specification mandates a certain performance level for a combination of these two amplifiers.
For the express path, the effective noise mask comprises the preamplifier and the booster.
When terminating a channel, the same effective noise mask is mandated for a combination of the preamplifier and the drop stage.
GNPy emulates this specification via two special NF models:
- The ``openroadm_preamp`` NF model for preamplifiers.
This NF model provides all of the linear impairments to the signal, including those which are incurred by the booster in a real network.
- The ``openroadm_booster`` NF model is a special "zero noise" faux amplifier in place of the booster.
.. _ext-nf-model-min-max-NF:
Min-max NF
**********
When the vendor prefers not to share the amplifier description in full detail, GNPy also supports describing the NF characteristics via the *minimal* and *maximal NF*.
This approximates a more accurate polynomial description reasonably well for some models of a dual-coil EDFA with a VOA in between.
In these amplifiers, the minimal NF is achieved when the EDFA operates at its maximal (and usually optimal, in terms of flatness) gain.
The worst (maximal) NF applies when the EDFA operates at the minimal gain.
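A sketch of how such an amplifier might be declared in the equipment library,
assuming the ``variable_gain`` type with ``nf_min``/``nf_max`` fields (the key
names and values here are illustrative; check the equipment library shipped
with GNPy for the authoritative format):

.. code-block:: json

    {
        "type_variety": "std_medium_gain",
        "type_def": "variable_gain",
        "gain_flatmax": 26,
        "gain_min": 15,
        "p_max": 21,
        "nf_min": 6.0,
        "nf_max": 10.0,
        "allowed_for_design": true
    }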
.. _ext-nf-model-dual-stage-amplifier:
Dual-stage
**********
A dual-stage amplifier combines two distinct amplifiers.
Vendors which provide an accurate description of their preamp and booster stages separately can use the dual-stage model for an aggregate description of the whole amplifier.
.. _ext-nf-model-advanced:
Advanced Specification
**********************
The amplifier performance can be further described in terms of gain ripple, NF ripple, and the dynamic gain tilt.
When provided, the amplifier characteristic is fine-tuned as a function of carrier frequency.
.. _extending-raman:
Raman Amplifiers
----------------
An accurate simulation of Raman amplification requires knowledge of:
- the *power* and *wavelength* of all Raman pumping lasers,
- the *direction*, whether it is co-propagating or counter-propagating,
- the Raman efficiency of the fiber,
- the fiber temperature.
Under certain scenarios it is useful to be able to run a simulation without an accurate Raman description.
For these purposes, it is possible to approximate a Raman amplifier via a fixed-gain EDFA with the :ref:`polynomial NF<ext-nf-model-polynomial-NF>` model using :math:`\text{a} = \text{b} = \text{c} = 0`, and a desired effective :math:`\text{d} = NF`.
This is also useful to quickly approximate a hybrid EDFA+Raman amplifier.
.. _extending-transponder:
Transponders
------------
Since transponders are usually capable of operating in a variety of modes, these are described separately.
A *mode* usually refers to a particular performance point that is defined by a combination of the symbol rate, modulation format, and :abbr:`FEC (Forward Error Correction)`.
The following data are required for each mode:
``bit-rate``
Data bit rate, in :math:`\text{Gbits}\times s^{-1}`.
``baud-rate``
Symbol modulation rate, in :math:`\text{Gbaud}`.
``required-osnr``
Minimal allowed OSNR for the receiver.
``tx-osnr``
Initial OSNR at the transmitter's output.
``grid-spacing``
Minimal grid spacing, i.e., an effective channel spectral bandwidth.
In :math:`\text{Hz}`.
``tx-roll-off``
Roll-off parameter (:math:`\beta`) of the TX pulse shaping filter.
This assumes a raised-cosine filter.
``rx-power-min`` and ``rx-power-max``
The allowed range of power at the receiver.
In :math:`\text{dBm}`.
``cd-max``
Maximal allowed Chromatic Dispersion (CD).
In :math:`\text{ps}/\text{nm}`.
``pmd-max``
Maximal allowed Polarization Mode Dispersion (PMD).
In :math:`\text{ps}`.
``cd-penalty``
*Work-in-progress.*
Describes the increase of the required GSNR as the :abbr:`CD (Chromatic Dispersion)` deteriorates.
``dgd-penalty``
*Work-in-progress.*
Describes the increase of the required GSNR as the :abbr:`DGD (Differential Group Delay)` deteriorates.
``pmd-penalty``
*Work-in-progress.*
Describes the increase of the required GSNR as the :abbr:`PMD (Polarization Mode Dispersion)` deteriorates.
GNPy does not directly track the FEC performance, so the chosen FEC type is typically indicated only in the *name* of the selected transponder mode.
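In the JSON equipment library, each mode is a dictionary under the
transceiver's ``mode`` list. A sketch of one mode, mirroring the example
equipment file included later in this changeset (the JSON key names differ
slightly from the YANG-style names above, e.g. ``baud_rate`` and
``min_spacing``; the CD/PMD limits and penalties are omitted here):

.. code-block:: json

    {
        "format": "dp-qpsk",
        "baud_rate": 32e9,
        "OSNR": 11,
        "bit_rate": 100e9,
        "roll_off": 0.15,
        "tx_osnr": 40,
        "min_spacing": 37.5e9,
        "cost": 1
    }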
.. _extending-roadm:
ROADMs
------
In a :abbr:`ROADM (Reconfigurable Optical Add/Drop Multiplexer)`, GNPy simulates the impairments of the preamplifiers and boosters of line degrees :ref:`separately<topo-roadm-preamp-booster>`.
The set of parameters for each ROADM model therefore includes:
``add-drop-osnr``
OSNR penalty introduced by the Add and Drop stages of this ROADM type.
``target-channel-out-power``
Per-channel target TX power towards the egress amplifier.
Within GNPy, a ROADM is expected to attenuate any signal that enters the ROADM node to this level.
This can be overridden on a per-link basis in the network topology.
``pmd``
Polarization mode dispersion (PMD) penalty of the express path.
In :math:`\text{ps}`.
Provisions are in place to define the list of all allowed booster and preamplifier types.
This is useful for specifying constraints on what amplifier modules fit into ROADM chassis, and when using fully disaggregated ROADM topologies as well.

View File

@@ -1,13 +0,0 @@
``gnpy.core``
-------------
.. automodule:: gnpy.core
.. automodule:: gnpy.core.ansi_escapes
.. automodule:: gnpy.core.elements
.. automodule:: gnpy.core.equipment
.. automodule:: gnpy.core.exceptions
.. automodule:: gnpy.core.info
.. automodule:: gnpy.core.network
.. automodule:: gnpy.core.parameters
.. automodule:: gnpy.core.science_utils
.. automodule:: gnpy.core.utils

View File

@@ -1,9 +0,0 @@
``gnpy.tools``
--------------
.. automodule:: gnpy.tools
.. automodule:: gnpy.tools.cli_examples
.. automodule:: gnpy.tools.convert
.. automodule:: gnpy.tools.json_io
.. automodule:: gnpy.tools.plots
.. automodule:: gnpy.tools.service_sheet

View File

@@ -1,6 +0,0 @@
``gnpy.topology``
-----------------
.. automodule:: gnpy.topology
.. automodule:: gnpy.topology.request
.. automodule:: gnpy.topology.spectrum_assignment

View File

@@ -1,14 +0,0 @@
***************************
API Reference Documentation
***************************
``gnpy`` package
================
.. automodule:: gnpy
.. toctree::
gnpy-api-core
gnpy-api-topology
gnpy-api-tools

Binary file not shown.


View File

@@ -1,22 +1,33 @@
GNPy: Optical Route Planning Library
=====================================================================
.. gnpy documentation master file, created by
sphinx-quickstart on Mon Dec 18 14:41:01 2017.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
`GNPy <http://github.com/telecominfraproject/gnpy>`_ is an open-source,
community-developed library for building route planning and optimization tools
in real-world mesh optical networks. It is based on the Gaussian Noise Model.
Welcome to gnpy's documentation!
================================
**gnpy is an open-source, community-developed library for building route planning
and optimization tools in real-world mesh optical networks.**
`gnpy <http://github.com/telecominfraproject/gnpy>`_ is:
- a sponsored project of the `OOPT/PSE <http://telecominfraproject.com/project-groups-2/backhaul-projects/open-optical-packet-transport/>`_ working group of the `Telecom Infra Project <http://telecominfraproject.com>`_.
- fully community-driven, fully open source library
- driven by a consortium of operators, vendors, and academic researchers
- intended for rapid development of production-grade route planning tools
- easily extensible to include custom network elements
- performant to the scale of real-world mesh optical networks
Documentation
=============
The following pages are meant to describe specific implementation details and
modeling assumptions behind gnpy.
.. toctree::
:maxdepth: 4
:maxdepth: 2
intro
concepts
install
json
excel
extending
about-project
model
gnpy-api
Indices and tables
==================
@@ -25,3 +36,67 @@ Indices and tables
* :ref:`modindex`
* :ref:`search`
Contributors in alphabetical order
==================================
+----------+------------+-----------------------+--------------------------------------+
| Name | Surname | Affiliation | Contact |
+==========+============+=======================+======================================+
| Alessio | Ferrari | Politecnico di Torino | alessio.ferrari@polito.it |
+----------+------------+-----------------------+--------------------------------------+
| Anders | Lindgren | Telia Company | Anders.X.Lindgren@teliacompany.com |
+----------+------------+-----------------------+--------------------------------------+
| Andrea | d'Amico | Politecnico di Torino | andrea.damico@polito.it |
+----------+------------+-----------------------+--------------------------------------+
| Brian | Taylor | Facebook | briantaylor@fb.com |
+----------+------------+-----------------------+--------------------------------------+
| David | Boertjes | Ciena | dboertje@ciena.com |
+----------+------------+-----------------------+--------------------------------------+
| Diego | Landa | Facebook | dlanda@fb.com |
+----------+------------+-----------------------+--------------------------------------+
| Esther | Le Rouzic | Orange | esther.lerouzic@orange.com |
+----------+------------+-----------------------+--------------------------------------+
| Gabriele | Galimberti | Cisco | ggalimbe@cisco.com |
+----------+------------+-----------------------+--------------------------------------+
| Gert | Grammel | Juniper Networks | ggrammel@juniper.net |
+----------+------------+-----------------------+--------------------------------------+
| Gilad | Goldfarb | Facebook | giladg@fb.com |
+----------+------------+-----------------------+--------------------------------------+
| James | Powell | Telecom Infra Project | james.powell@telecominfraproject.com |
+----------+------------+-----------------------+--------------------------------------+
| Jan | Kundrát | Telecom Infra Project | jan.kundrat@telecominfraproject.com |
+----------+------------+-----------------------+--------------------------------------+
| Jeanluc | Augé | Orange | jeanluc.auge@orange.com |
+----------+------------+-----------------------+--------------------------------------+
| Jonas | Mårtensson | RISE Research Sweden | jonas.martensson@ri.se |
+----------+------------+-----------------------+--------------------------------------+
| Mattia | Cantono | Politecnico di Torino | mattia.cantono@polito.it |
+----------+------------+-----------------------+--------------------------------------+
| Miguel | Garrich | University Catalunya | miquel.garrich@upct.es |
+----------+------------+-----------------------+--------------------------------------+
| Raj | Nagarajan | Lumentum | raj.nagarajan@lumentum.com |
+----------+------------+-----------------------+--------------------------------------+
| Roberts | Miculens | Lattelecom | roberts.miculens@lattelecom.lv |
+----------+------------+-----------------------+--------------------------------------+
| Shengxiang | Zhu | University of Arizona | szhu@email.arizona.edu |
+----------+------------+-----------------------+--------------------------------------+
| Stefan | Melin | Telia Company | Stefan.Melin@teliacompany.com |
+----------+------------+-----------------------+--------------------------------------+
| Vittorio | Curri | Politecnico di Torino | vittorio.curri@polito.it |
+----------+------------+-----------------------+--------------------------------------+
| Xufeng | Liu | Jabil | xufeng_liu@jabil.com |
+----------+------------+-----------------------+--------------------------------------+
--------------
- Goal is to build an end-to-end simulation environment which defines the
network models of the optical device transfer functions and their parameters.
This environment will provide validation of the optical performance
requirements for the TIP OLS building blocks.
- The model may be approximate or complete depending on the network complexity.
Each model shall be validated against the proposed network scenario.
- The environment must be able to process network models from multiple vendors,
and also allow users to pick any implementation in an open source framework.
- The PSE will influence and benefit from the innovation of the DTC, API, and
OLS working groups.
- The PSE represents a step along the journey towards multi-layer optimization.

View File

@@ -1,111 +0,0 @@
Installing GNPy
---------------
There are several ways to obtain GNPy.
The easiest option for a non-developer is probably going via our :ref:`Docker images<install-docker>`.
Developers are encouraged to install the :ref:`Python package in the same way as any other Python package<install-pip>`.
Note that this needs a :ref:`working installation of Python<install-python>`, for example :ref:`via Anaconda<install-anaconda>`.
.. _install-docker:
Using prebuilt Docker images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Our `Docker images <https://hub.docker.com/r/telecominfraproject/oopt-gnpy>`_ contain everything needed to run all examples from this guide.
Docker transparently fetches the image over the network upon first use.
On Linux and Mac, run:
.. code-block:: shell-session
$ docker run -it --rm --volume $(pwd):/shared telecominfraproject/oopt-gnpy
root@bea050f186f7:/shared/example-data#
On Windows, launch from Powershell as:
.. code-block:: console
PS C:\> docker run -it --rm --volume ${PWD}:/shared telecominfraproject/oopt-gnpy
root@89784e577d44:/shared/example-data#
In both cases, a directory named ``example-data/`` will appear in your current working directory.
GNPy automatically populates it with example files from the current release.
Remove that directory if you want to start from scratch.
.. _install-python:
Using Python on your computer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Note**: `gnpy` supports Python 3 only. Python 2 is not supported.
`gnpy` requires Python ≥3.8
**Note**: the `gnpy` maintainers strongly recommend the use of Anaconda for
managing dependencies.
It is recommended that you use a "virtual environment" when installing `gnpy`.
Do not install `gnpy` on your system Python.
.. _install-anaconda:
We recommend the use of the `Anaconda Python distribution <https://www.anaconda.com/download>`_ which comes with many scientific computing
dependencies pre-installed. Anaconda creates a base "virtual environment" for
you automatically. You can also create and manage your ``conda`` "virtual
environments" yourself (see:
https://conda.io/docs/user-guide/tasks/manage-environments.html)
To activate your Anaconda virtual environment, you may need to do the
following:
.. code-block:: shell-session
$ source /path/to/anaconda/bin/activate # activate Anaconda base environment
(base) $ # note the change to the prompt
You can check which Anaconda environment you are using with:
.. code-block:: shell-session
(base) $ conda env list # list all environments
# conda environments:
#
base * /src/install/anaconda3
(base) $ echo $CONDA_DEFAULT_ENV # show default environment
base
You can check your version of Python with the following. If you are using
Anaconda's Python 3, you should see similar output as below. Your results may
be slightly different depending on your Anaconda installation path and the
exact version of Python you are using.
.. code-block:: shell-session
$ which python # check which Python executable is used
/path/to/anaconda/bin/python
$ python -V # check your Python version
Python 3.8.0 :: Anaconda, Inc.
.. _install-pip:
Installing the Python package
*****************************
From within your Anaconda Python 3 environment, you can clone the master branch
of the `gnpy` repo and install it with:
.. code-block:: shell-session
$ git clone https://github.com/Telecominfraproject/oopt-gnpy # clone the repo
$ cd oopt-gnpy
$ pip install --editable . # note the trailing dot
To test that `gnpy` was successfully installed, you can run this command. If it
executes without a ``ModuleNotFoundError``, you have successfully installed
`gnpy`.
.. code-block:: shell-session
$ python -c 'import gnpy' # attempt to import gnpy
$ pytest # run tests

View File

@@ -1,94 +0,0 @@
.. _intro:
Introduction
============
``gnpy`` is a library for building route planning and optimization tools.
It ships with a number of example programs. Release versions will ship with
fully-functional programs.
**Note**: *If you are a network operator or involved in route planning and
optimization for your organization, please contact project maintainer Jan
Kundrát <jan.kundrat@telecominfraproject.com>. gnpy is looking for users with
specific, delineated use cases to drive requirements for future
development.*
This example demonstrates how GNPy can be used to check the expected SNR at the end of the line by varying the channel input power:
.. image:: https://telecominfraproject.github.io/oopt-gnpy/docs/images/transmission_main_example.svg
:width: 100%
:align: left
:alt: Running a simple simulation example
By default, this script operates on a single span network defined in
`gnpy/example-data/edfa_example_network.json <gnpy/example-data/edfa_example_network.json>`_
You can specify a different network at the command line as follows. For
example, to use the CORONET Global network defined in
`gnpy/example-data/CORONET_Global_Topology.json <gnpy/example-data/CORONET_Global_Topology.json>`_:
.. code-block:: shell-session
$ gnpy-transmission-example $(gnpy-example-data)/CORONET_Global_Topology.json
It is also possible to use an Excel file input (for example
`gnpy/example-data/CORONET_Global_Topology.xls <gnpy/example-data/CORONET_Global_Topology.xls>`_).
The Excel file will be processed into a JSON file with the same prefix.
Further details about the Excel data structure are available `in the documentation <docs/excel.rst>`__.
The main transmission example will calculate the average signal OSNR and SNR
across network elements (transceiver, ROADMs, fibers, and amplifiers)
between two transceivers selected by the user. Additional details are provided by doing ``gnpy-transmission-example -h``. (By default, for the CORONET Global
network, it will show the transmission of spectral information between Abilene and Albany)
This script calculates the average signal OSNR = |OSNR| and SNR = |SNR|.
.. |OSNR| replace:: P\ :sub:`ch`\ /P\ :sub:`ase`
.. |SNR| replace:: P\ :sub:`ch`\ /(P\ :sub:`nli`\ +\ P\ :sub:`ase`)
|Pase| is the amplified spontaneous emission noise, and |Pnli| the non-linear
interference noise.
.. |Pase| replace:: P\ :sub:`ase`
.. |Pnli| replace:: P\ :sub:`nli`
Further Instructions for Use
----------------------------
Simulations are driven by a set of `JSON <docs/json.rst>`__ or `XLS <docs/excel.rst>`__ files.
The ``gnpy-transmission-example`` script propagates a spectrum of channels at 32 Gbaud, 50 GHz spacing and 0 dBm/channel.
Launch power can be overridden by using the ``--power`` argument.
Spectrum information is not yet parametrized but can be modified directly in the ``eqpt_config.json`` (via the ``SpectralInformation`` -SI- structure) to accommodate any baud rate or spacing.
The number of channels is computed based on ``spacing`` and ``f_min``, ``f_max`` values.
An experimental support for Raman amplification is available:
.. code-block:: shell-session
$ gnpy-transmission-example \
$(gnpy-example-data)/raman_edfa_example_network.json \
--sim $(gnpy-example-data)/sim_params.json --show-channels
Configuration of Raman pumps (their frequencies, power and pumping direction) is done via the `RamanFiber element in the network topology <gnpy/example-data/raman_edfa_example_network.json>`_.
General numeric parameters for simulation control are provided in the `gnpy/example-data/sim_params.json <gnpy/example-data/sim_params.json>`_.
Use ``gnpy-path-request`` to request several paths at once:
.. code-block:: shell-session
$ cd $(gnpy-example-data)
$ gnpy-path-request -o output_file.json \
meshTopologyExampleV2.xls meshTopologyExampleV2_services.json
This program operates on a network topology (`JSON <docs/json.rst>`__ or `Excel <docs/excel.rst>`__ format), processing the list of service requests (JSON or XLS again).
The service requests and reply formats are based on the `draft-ietf-teas-yang-path-computation-01 <https://tools.ietf.org/html/draft-ietf-teas-yang-path-computation-01>`__ with custom extensions (e.g., for transponder modes).
An example of the JSON input is provided in file `service-template.json`, while results are shown in `path_result_template.json`.
Important note: ``gnpy-path-request`` is not a network dimensioning tool: each service neither reserves spectrum nor occupies resources such as transponders. It only computes path feasibility assuming the spectrum (between defined frequencies) is loaded with "nb of channels" spaced by "spacing" values as specified in the system parameters input in the service file, each channel having the same characteristics in terms of baud rate, format, etc. as the service transponder. The transceiver element acts as a "logical starting/stopping point" for the spectral information propagation. At that point it is not meant to represent the capacity of add/drop ports.
As a result, the transponder type is not part of the network info; it is related to the list of service requests.
The current version includes a spectrum assignment feature that computes a candidate spectrum assignment for each service based on a first-fit policy. Spectrum is assigned based on the service's specified spacing value, path_bandwidth value and selected mode for the transceiver. This spectrum assignment includes a basic capacity planning capability, so that the spectrum resource is limited by the frequency min and max values defined for the links. If the requested services reach the link spectrum capacity, the feasibility of additional services is still computed, but they are marked as blocked for spectrum reasons.
OpenROADM networks can be simulated via ``gnpy/example-data/eqpt_config_openroadm_*.json`` -- see ``gnpy/example-data/Sweden_OpenROADM*_example_network.json`` as an example.

View File

@@ -1,7 +1,5 @@
.. _physical-model:
Physical Model used in GNPy
===========================
The QoT estimation in the PSE framework of TIP-OOPT
=======================================================
QoT-E including ASE noise and NLI accumulation
----------------------------------------------
@@ -145,4 +143,4 @@ Raman Scattering in order to give a proper estimation for all channels
:cite:`cantono2018modeling`. This will be the main upgrade required within the
PSE framework.
.. bibliography::
.. bibliography:: biblio.bib

View File

@@ -1,7 +0,0 @@
alabaster>=0.7.12,<1
docutils>=0.17.1,<1
myst-parser>=0.16.1,<1
Pygments>=2.11.2,<3
rstcheck
Sphinx>=4.4.0,<5
sphinxcontrib-bibtex>=2.4.1,<3

docs/source/gnpy.core.rst
View File

@@ -0,0 +1,94 @@
gnpy\.core package
==================
Submodules
----------
gnpy\.core\.ansi_escapes module
-------------------------------
.. automodule:: gnpy.core.ansi_escapes
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.convert module
--------------------------
.. automodule:: gnpy.core.convert
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.elements module
---------------------------
.. automodule:: gnpy.core.elements
gnpy\.core\.equipment module
----------------------------
.. automodule:: gnpy.core.equipment
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.exceptions module
-----------------------------
.. automodule:: gnpy.core.exceptions
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.execute module
--------------------------
.. automodule:: gnpy.core.execute
gnpy\.core\.info module
-----------------------
.. automodule:: gnpy.core.info
gnpy\.core\.network module
--------------------------
.. automodule:: gnpy.core.network
gnpy\.core\.node module
-----------------------
.. automodule:: gnpy.core.node
gnpy\.core\.request module
--------------------------
.. automodule:: gnpy.core.request
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.service_sheet module
--------------------------------
.. automodule:: gnpy.core.service_sheet
:members:
:undoc-members:
:show-inheritance:
gnpy\.core\.units module
------------------------
.. automodule:: gnpy.core.units
gnpy\.core\.utils module
------------------------
.. automodule:: gnpy.core.utils
Module contents
---------------
.. automodule:: gnpy.core

docs/source/gnpy.rst (new file, 14 lines)

@@ -0,0 +1,14 @@
gnpy package
============
Subpackages
-----------
.. toctree::
gnpy.core
Module contents
---------------
.. automodule:: gnpy

docs/source/modules.rst (new file, 7 lines)

@@ -0,0 +1,7 @@
gnpy
====
.. toctree::
:maxdepth: 4
gnpy


@@ -0,0 +1,124 @@
{ "Edfa":[
{
"type_variety": "fixed27",
"type_def": "fixed_gain",
"gain_flatmax": 27,
"gain_min": 27,
"p_max": 21,
"nf0": 5.5,
"allowed_for_design": false
},
{
"type_variety": "fixed22",
"type_def": "fixed_gain",
"gain_flatmax": 22,
"gain_min": 22,
"p_max": 21,
"nf0": 5.5,
"allowed_for_design": false
}
],
"Fiber":[{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"gamma": 0.00127
},
{
"type_variety": "NZDF",
"dispersion": 0.5e-05,
"gamma": 0.00146
},
{
"type_variety": "LOF",
"dispersion": 2.2e-05,
"gamma": 0.000843
}
],
"Span":[{
"power_mode": false,
"delta_power_range_db": [-2,3,0.5],
"max_fiber_lineic_loss_for_raman": 0.25,
"target_extended_gain": 2.5,
"max_length": 150,
"length_units": "km",
"max_loss": 28,
"padding": 10,
"EOL": 0,
"con_in": 0,
"con_out": 0
}
],
"Roadm":[{
"target_pch_out_db": -25,
"add_drop_osnr": 30.00,
"restrictions": {
"preamp_variety_list":[],
"booster_variety_list":[]
}
}],
"SI":[{
"f_min": 191.6e12,
"baud_rate": 32e9,
"f_max":195.1e12,
"spacing": 50e9,
"power_dbm": 0,
"power_range_db": [0,0,1],
"roll_off": 0.15,
"tx_osnr": 40,
"sys_margins": 2
}],
"Transceiver":[
{
"type_variety": "Cassini",
"frequency":{
"min": 191.35e12,
"max": 196.1e12
},
"mode":[
{
"format": "dp-qpsk",
"baud_rate": 32e9,
"OSNR": 11,
"bit_rate": 100e9,
"roll_off": 0.15,
"tx_osnr": 40,
"min_spacing": 37.5e9,
"cost":1
},
{
"format": "16-qam",
"baud_rate": 66e9,
"OSNR": 15,
"bit_rate": 200e9,
"roll_off": 0.15,
"tx_osnr": 40,
"min_spacing": 75e9,
"cost":1
}
]
},
{
"type_variety": "Voyager",
"frequency":{
"min": 191.35e12,
"max": 196.1e12
},
"mode":[
{
"format": "mode 1",
"baud_rate": 32e9,
"OSNR": 12,
"bit_rate": 100e9,
"roll_off": 0.15,
"tx_osnr": 40,
"min_spacing": 37.5e9,
"cost":1
}
]
}
]
}


@@ -0,0 +1,67 @@
{
"path-request": [
{
"request-id": "first",
"source": "netconf:10.0.254.93:830",
"destination": "netconf:10.0.254.94:830",
"src-tp-id": "trx-Amsterdam",
"dst-tp-id": "trx-Bremen",
"bidirectional": true,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Cassini",
"trx_mode": null,
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 100000000000.0
}
}
},
{
"request-id": "second",
"source": "netconf:10.0.254.93:830",
"destination": "netconf:10.0.254.94:830",
"src-tp-id": "trx-Amsterdam",
"dst-tp-id": "trx-Bremen",
"bidirectional": true,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Cassini",
"trx_mode": null,
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 100000000000.0
}
}
}
],
"synchronization": [
{
"synchronization-id": "some redundancy please",
"svec": {
"relaxable": "false",
"disjointness": "node link",
"request-id-number": [
"first",
"second"
]
}
}
]
}

File diff suppressed because it is too large.


@@ -0,0 +1,179 @@
# How many nodes in the ring topology? Up to eight is supported, then I ran out of cities..
HOW_MANY = 3
# city names
ALL_CITIES = [
'Amsterdam',
'Bremen',
'Cologne',
'Dueseldorf',
'Eindhoven',
'Frankfurt',
'Ghent',
'Hague',
]
# end of configurable parameters
J = {
"elements": [],
"connections": [],
}
def unidir_join(a, b):
global J
J["connections"].append(
{"from_node": a, "to_node": b}
)
def mk_edfa(name, gain, voa=0.0):
global J
J["elements"].append(
{"uid": name, "type": "Edfa", "type_variety": f"fixed{gain}", "operational": {"gain_target": gain, "out_voa": voa}}
)
def add_att(a, b, att):
global J
if att > 0:
uid = f"att-({a})-({b})"
else:
uid = f"splice-({a})-({b})"
J["elements"].append(
{"uid": uid, "type": "Fused", "params": {"loss": att}},
)
unidir_join(a, uid)
unidir_join(uid, b)
return uid
def build_fiber(city1, city2):
global J
J["elements"].append(
{
"uid": f"fiber-{city1}-{city2}",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"length": 50,
"length_units": "km",
"loss_coef": 0.2,
"con_in": 1.5,
"con_out": 1.5,
}
}
)
def unidir_patch(a, b):
global J
uid = f"patch-({a})-({b})"
J["elements"].append(
{
"uid": uid,
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"length": 0,
"length_units": "km",
"loss_coef": 0.2,
"con_in": 0.5,
"con_out": 0.5,
}
}
)
add_att(a, uid, 0.0)
add_att(uid, b, 0.0)
for CITY in (ALL_CITIES[x] for x in range(0, HOW_MANY)):
J["elements"].append(
{"uid": f"trx-{CITY}", "type": "Transceiver"}
)
target_pwr = [
{"to_node": f"trx-{CITY}", "target_pch_out_db": -25},
{"to_node": f"splice-(roadm-{CITY}-AD)-(patch-(roadm-{CITY}-AD)-(roadm-{CITY}-L1))", "target_pch_out_db": -12},
{"to_node": f"splice-(roadm-{CITY}-AD)-(patch-(roadm-{CITY}-AD)-(roadm-{CITY}-L2))", "target_pch_out_db": -12},
]
J["elements"].append(
{"uid": f"roadm-{CITY}-AD", "type": "Roadm", "params": {"target_pch_out_db": -2.0, "per_degree_target_pch_out_db": target_pwr}}
)
unidir_join(f"trx-{CITY}", f"roadm-{CITY}-AD")
unidir_join(f"roadm-{CITY}-AD", f"trx-{CITY}")
for n in (1,2):
target_pwr = [
{"to_node": f"roadm-{CITY}-L{n}-booster", "target_pch_out_db": -23},
{"to_node": f"splice-(roadm-{CITY}-L{n})-(patch-(roadm-{CITY}-L{n})-(roadm-{CITY}-AD))", "target_pch_out_db": -12},
]
for m in (1,2):
if m == n:
continue
target_pwr.append(
{"to_node": f"splice-(roadm-{CITY}-L{n})-(patch-(roadm-{CITY}-L{n})-(roadm-{CITY}-L{m}))", "target_pch_out_db": -12},
)
J["elements"].append(
{"uid": f"roadm-{CITY}-L{n}", "type": "Roadm", "params": {"target_pch_out_db": -23.0, "per_degree_target_pch_out_db": target_pwr}}
)
mk_edfa(f"roadm-{CITY}-L{n}-booster", 22)
mk_edfa(f"roadm-{CITY}-L{n}-preamp", 27)
unidir_join(f"roadm-{CITY}-L{n}", f"roadm-{CITY}-L{n}-booster")
unidir_join(f"roadm-{CITY}-L{n}-preamp", f"roadm-{CITY}-L{n}")
unidir_patch(f"roadm-{CITY}-AD", f"roadm-{CITY}-L{n}")
unidir_patch(f"roadm-{CITY}-L{n}", f"roadm-{CITY}-AD")
for m in (1,2):
if m == n:
continue
#add_att(f"roadm-{CITY}-L{n}", f"roadm-{CITY}-L{m}", 22)
unidir_patch(f"roadm-{CITY}-L{n}", f"roadm-{CITY}-L{m}")
for city1, city2 in ((ALL_CITIES[i], ALL_CITIES[i + 1] if i < HOW_MANY - 1 else ALL_CITIES[0]) for i in range(0, HOW_MANY)):
build_fiber(city1, city2)
unidir_join(f"roadm-{city1}-L1-booster", f"fiber-{city1}-{city2}")
unidir_join(f"fiber-{city1}-{city2}", f"roadm-{city2}-L2-preamp")
build_fiber(city2, city1)
unidir_join(f"roadm-{city2}-L2-booster", f"fiber-{city2}-{city1}")
unidir_join(f"fiber-{city2}-{city1}", f"roadm-{city1}-L1-preamp")
for _, E in enumerate(J["elements"]):
uid = E["uid"]
if uid.startswith("roadm-") and (uid.endswith("-L1-booster") or uid.endswith("-L2-booster")):
E["operational"]["out_voa"] = 12.0
#if uid.endswith("-AD-add"):
# E["operational"]["out_voa"] = 21
translate = {
#"trx-Amsterdam": "10.0.254.93",
#"trx-Bremen": "10.0.254.94",
"trx-Amsterdam": "10.0.254.76",
"trx-Bremen": "10.0.254.77",
# Amsterdam A/D: coherent-v9u
"roadm-Amsterdam-AD": "10.0.254.107",
# Bremen A/D: -spi
"roadm-Bremen-AD": "10.0.254.225",
# Amsterdam -> Bremen ...QR79
"roadm-Amsterdam-L1": "10.0.254.78",
# Bremen -> Amsterdam ...QCP9
"roadm-Bremen-L2": "10.0.254.102",
# Bremen -> Cologne ...WKP
"roadm-Bremen-L1": "10.0.254.100",
# Cologne -> Bremen ...QLK6
"roadm-Cologne-L2": "10.0.254.104",
# Cologne -> Amsterdam ...TQQ
"roadm-Cologne-L1": "10.0.254.99",
# Amsterdam -> Cologne ...Q7JS
"roadm-Amsterdam-L2": "10.0.254.79",
# spare Line/Degree ...QC8B
"spare-line-degree": "10.0.254.101",
# spare Add/Drop: ...NNN
"spare-add-drop": "10.0.254.228",
}
import json
s = json.dumps(J, indent=2)
for (old, new) in translate.items():
s = s.replace(f'"{old}"', f'"netconf:{new}:830"')
print(s)


@@ -1,13 +1,13 @@
{
"nf_fit_coeff": [
"nf_fit_coeff": [
0.0008,
0.0272,
-0.2249,
6.4902
],
"f_min": 191.4e12,
"f_max": 196.1e12,
"nf_ripple": [
],
"f_min": 191.35e12,
"f_max": 196.1e12,
"nf_ripple": [
0.0,
0.0,
0.0,
@@ -58,103 +58,103 @@
0.0
],
"gain_ripple": [
-0.15656302345061,
-0.22244242043552,
-0.25188965661642,
-0.23575900335007,
-0.20897508375209,
-0.19440221943049,
-0.18324644053602,
-0.18053287269681,
-0.17113588777219,
-0.15460322445561,
-0.13550774706866,
-0.10606051088777,
-0.0765630234506,
-0.04962835008375,
-0.01319618927973,
0.01027114740367,
0.03378873534338,
0.04961788107202,
0.04494451423784,
0.0399193886097,
0.01584903685091,
-0.00420121440538,
-0.01847257118928,
-0.02475397822447,
-0.01053287269681,
0.01509526800668,
0.05921587102177,
0.1191656197655,
0.18147717755444,
0.23579878559464,
0.26941687604691,
0.27836159966498,
0.26956762981574,
0.23826109715241,
0.18936662479061,
0.1204721524288,
0.0453465242881,
-0.00877407872698,
-0.02199015912898,
0.00107516750419,
0.02795958961474,
0.02740682579566,
-0.01028161641541,
-0.05982935510889,
-0.06701528475711,
0.00223094639866,
0.15017064489112,
0.14157768006701,
0.15017064489112
0.00223094639866,
-0.06701528475711,
-0.05982935510889,
-0.01028161641541,
0.02740682579566,
0.02795958961474,
0.00107516750419,
-0.02199015912898,
-0.00877407872698,
0.0453465242881,
0.1204721524288,
0.18936662479061,
0.23826109715241,
0.26956762981574,
0.27836159966498,
0.26941687604691,
0.23579878559464,
0.18147717755444,
0.1191656197655,
0.05921587102177,
0.01509526800668,
-0.01053287269681,
-0.02475397822447,
-0.01847257118928,
-0.00420121440538,
0.01584903685091,
0.0399193886097,
0.04494451423784,
0.04961788107202,
0.03378873534338,
0.01027114740367,
-0.01319618927973,
-0.04962835008375,
-0.0765630234506,
-0.10606051088777,
-0.13550774706866,
-0.15460322445561,
-0.17113588777219,
-0.18053287269681,
-0.18324644053602,
-0.19440221943049,
-0.20897508375209,
-0.23575900335007,
-0.25188965661642,
-0.22244242043552,
-0.15656302345061
],
"dgt": [
1.0,
1.03941448941778,
1.07773189112355,
1.11575888725852,
1.15209185089701,
1.18632744096844,
1.21911100318577,
1.24931318255134,
1.27657903892303,
1.30069883494415,
1.32210817897091,
1.3405812000038,
1.35690844654118,
1.3710092503689,
1.38430337205545,
1.3966294751726,
1.40864903907609,
1.42089447397912,
1.43476940680732,
1.44977369463316,
1.46637521309853,
1.48420288841848,
1.50335352244996,
1.5242627235492,
1.54578500307573,
1.56750088631614,
1.58973304612691,
1.61073904908309,
1.63068023161292,
1.64799163036252,
1.66286684904577,
1.6761448370895,
1.68845480656382,
1.70379790088896,
1.72461030013125,
1.75428006928365,
1.79748596476494,
1.85543800978691,
1.92915262384742,
2.01414465424155,
2.10336369905543,
2.19013043016015,
2.26678136721453,
2.33147727493671,
2.38192717604575,
2.41879254989742,
2.4553191172498,
2.44342862248888,
2.4553191172498
2.41879254989742,
2.38192717604575,
2.33147727493671,
2.26678136721453,
2.19013043016015,
2.10336369905543,
2.01414465424155,
1.92915262384742,
1.85543800978691,
1.79748596476494,
1.75428006928365,
1.72461030013125,
1.70379790088896,
1.68845480656382,
1.6761448370895,
1.66286684904577,
1.64799163036252,
1.63068023161292,
1.61073904908309,
1.58973304612691,
1.56750088631614,
1.54578500307573,
1.5242627235492,
1.50335352244996,
1.48420288841848,
1.46637521309853,
1.44977369463316,
1.43476940680732,
1.42089447397912,
1.40864903907609,
1.3966294751726,
1.38430337205545,
1.3710092503689,
1.35690844654118,
1.3405812000038,
1.32210817897091,
1.30069883494415,
1.27657903892303,
1.24931318255134,
1.21911100318577,
1.18632744096844,
1.15209185089701,
1.11575888725852,
1.07773189112355,
1.03941448941778,
1.0
]
}

examples/__init__.py (new empty file)


@@ -11,22 +11,20 @@ If not present in the "Nodes" sheet, the "Type" column will be implicitly
determined based on the topology.
"""
from xlrd import open_workbook
try:
from xlrd import open_workbook
except ModuleNotFoundError:
exit('Required: `pip install xlrd`')
from argparse import ArgumentParser
PARSER = ArgumentParser()
PARSER.add_argument('workbook', nargs='?', default='meshTopologyExampleV2.xls',
help='create the mandatory columns in Eqpt sheet')
def ALL_ROWS(sh, start=0):
return (sh.row(x) for x in range(start, sh.nrows))
ALL_ROWS = lambda sh, start=0: (sh.row(x) for x in range(start, sh.nrows))
class Node:
""" Node element contains uid, list of connected nodes and eqpt type
"""
def __init__(self, uid, to_node):
self.uid = uid
self.to_node = to_node
@@ -38,7 +36,6 @@ class Node:
def __str__(self):
return f'uid {self.uid} \nto_node {[node for node in self.to_node]}\neqpt {self.eqpt}\n'
def read_excel(input_filename):
""" read excel Nodes and Links sheets and create a dict of nodes with
their to_nodes and type of eqpt
@@ -76,7 +73,6 @@ def read_excel(input_filename):
exit()
return nodes
def create_eqt_template(nodes, input_filename):
""" writes list of node A node Z corresponding to Nodes and Links sheets in order
to help user populating Eqpt
@@ -89,6 +85,7 @@ def create_eqt_template(nodes, input_filename):
\nNode A \tNode Z \tamp type \tatt_in \tamp gain \ttilt \tatt_out\
amp type \tatt_in \tamp gain \ttilt \tatt_out\n')
for node in nodes.values():
if node.eqpt == 'ILA':
my_file.write(f'{node.uid}\t{node.to_node[0]}\n')
@@ -96,8 +93,8 @@ def create_eqt_template(nodes, input_filename):
for to_node in node.to_node:
my_file.write(f'{node.uid}\t{to_node}\n')
print(f'File {output_filename} successfully created with Node A - Node Z entries for Eqpt sheet in excel file.')
print(f'File {output_filename} successfully created with Node A - Node Z ' +
' entries for Eqpt sheet in excel file.')
if __name__ == '__main__':
ARGS = PARSER.parse_args()


@@ -1,106 +1,296 @@
{
"nf_ripple": [
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
"gain_ripple": [
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0
],
"dgt": [
1.0,
1.017807767853702,
1.0356155337864215,
1.0534217504465226,
1.0712204022764056,
1.0895983485572227,
1.108555289615659,
1.1280891949729075,
1.1476135933863398,
1.1672278304018044,
1.1869318618366975,
1.2067249615595257,
1.2264996957264114,
1.2428104897182262,
1.2556591482982988,
1.2650555289898042,
1.2744470198196236,
1.2838336236692311,
1.2932153453410835,
1.3040618749785347,
1.316383926863083,
1.3301807335621048,
1.3439818461440451,
1.3598972673004606,
1.3779439775587023,
1.3981208704326855,
1.418273806730323,
1.4340878115214444,
1.445565137158368,
1.45273959485914,
1.4599103316162523,
1.4670307626366115,
1.474100442252211,
1.48111939735681,
1.488134243479226,
1.495145456062699,
1.502153039909686,
1.5097346239790443,
1.5178910621476225,
1.5266220576235803,
1.5353620432989845,
1.545374152761467,
1.5566577309558969,
1.569199764184379,
1.5817353179379183,
1.5986915141218316,
1.6201194134191075,
1.6460167077689267,
1.6719047669939942,
1.6918150918099673,
1.7057507692361864,
1.7137640932265894,
1.7217732861435076,
1.7297783508684146,
1.737780757913635,
1.7459181197626403,
1.7541903672600494,
1.7625959636196327,
1.7709972329654864,
1.7793941781790852,
1.7877868031023945,
1.7961751115773796,
1.8045606557581335,
1.8139629377087627,
1.824381436842932,
1.835814081380705,
1.847275503201129,
1.862235672444246,
1.8806927939516411,
1.9026104247588487,
1.9245345552113182,
1.9482128147680253,
1.9736443063300082,
2.0008103857988204,
2.0279625371819305,
2.055100772005235,
2.082225099873648,
2.1183028432496016,
2.16337565384239,
2.2174389328192197,
2.271520771371253,
2.322373696229342,
2.3699990328716107,
2.414398437185221,
2.4587748041127506,
2.499446286796604,
2.5364027376452056,
2.5696460593920065,
2.602860350286428,
2.630396440815385,
2.6521732021128046,
2.6681935771243177,
2.6841217449620203,
2.6947834587664494,
2.714526681131686,
2.705443819238505,
2.714526681131686
2.6947834587664494,
2.6841217449620203,
2.6681935771243177,
2.6521732021128046,
2.630396440815385,
2.602860350286428,
2.5696460593920065,
2.5364027376452056,
2.499446286796604,
2.4587748041127506,
2.414398437185221,
2.3699990328716107,
2.322373696229342,
2.271520771371253,
2.2174389328192197,
2.16337565384239,
2.1183028432496016,
2.082225099873648,
2.055100772005235,
2.0279625371819305,
2.0008103857988204,
1.9736443063300082,
1.9482128147680253,
1.9245345552113182,
1.9026104247588487,
1.8806927939516411,
1.862235672444246,
1.847275503201129,
1.835814081380705,
1.824381436842932,
1.8139629377087627,
1.8045606557581335,
1.7961751115773796,
1.7877868031023945,
1.7793941781790852,
1.7709972329654864,
1.7625959636196327,
1.7541903672600494,
1.7459181197626403,
1.737780757913635,
1.7297783508684146,
1.7217732861435076,
1.7137640932265894,
1.7057507692361864,
1.6918150918099673,
1.6719047669939942,
1.6460167077689267,
1.6201194134191075,
1.5986915141218316,
1.5817353179379183,
1.569199764184379,
1.5566577309558969,
1.545374152761467,
1.5353620432989845,
1.5266220576235803,
1.5178910621476225,
1.5097346239790443,
1.502153039909686,
1.495145456062699,
1.488134243479226,
1.48111939735681,
1.474100442252211,
1.4670307626366115,
1.4599103316162523,
1.45273959485914,
1.445565137158368,
1.4340878115214444,
1.418273806730323,
1.3981208704326855,
1.3779439775587023,
1.3598972673004606,
1.3439818461440451,
1.3301807335621048,
1.316383926863083,
1.3040618749785347,
1.2932153453410835,
1.2838336236692311,
1.2744470198196236,
1.2650555289898042,
1.2556591482982988,
1.2428104897182262,
1.2264996957264114,
1.2067249615595257,
1.1869318618366975,
1.1672278304018044,
1.1476135933863398,
1.1280891949729075,
1.108555289615659,
1.0895983485572227,
1.0712204022764056,
1.0534217504465226,
1.0356155337864215,
1.017807767853702,
1.0
]
}
}

examples/demo.json (new file, 1033 lines)

File diff suppressed because it is too large.


@@ -8,7 +8,7 @@ Amplifier models and configuration
Equipment description defines equipment types and parameters.
It takes place in the default **eqpt_config.json** file.
By default **gnpy-transmission-example** uses **eqpt_config.json** file and that
By default **transmission_main_example.py** uses **eqpt_config.json** file and that
can be changed with **-e** or **--equipment** command line parameter.
2. Amplifier parameters and subtypes
@@ -227,7 +227,7 @@ In an opensource and multi-vendor environnement, it is needed to support differe
.. code-block:: json-object
"Edfa":[{
"type_variety": "openroadm_ila_low_noise",
"type_variety": "low_noise",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 12,
@@ -266,7 +266,7 @@ In an opensource and multi-vendor environnement, it is needed to support differe
4. advanced_config_from_json
#######################################
The build_oa_json.py library in ``gnpy/example-data/edfa_model/`` can be used to build the json file required for the amplifier advanced_model type_def:
The build_oa_json.py library in gnpy/examples/edfa_model can be used to build the json file required for the amplifier advanced_model type_def:
Update an existing json file with all the 96ch txt files for a given amplifier type
amplifier type 'OA_type1' is hard coded but can be modified and other types added


@@ -13,6 +13,7 @@ import re
import sys
import json
import numpy as np
from gnpy.core.utils import lin2db, db2lin
"""amplifier file names
convert a set of amplifier files + input json definiton file into a valid edfa_json_file:
@@ -27,63 +28,60 @@ input json file in argument (defult = 'OA.json')
the json input file should have the following fields:
{
"nf_fit_coeff": "nf_filename.txt",
"nf_ripple": "nf_ripple_filename.txt",
"nf_ripple": "nf_ripple_filename.txt",
"gain_ripple": "DFG_filename.txt",
"dgt": "DGT_filename.txt",
}
"""
input_json_file_name = "OA.json" # default path
input_json_file_name = "OA.json" #default path
output_json_file_name = "default_edfa_config.json"
gain_ripple_field = "gain_ripple"
nf_ripple_field = "nf_ripple"
nf_fit_coeff = "nf_fit_coeff"
def read_file(field, file_name):
"""read and format the 96 channels txt files describing the amplifier NF and ripple
convert dfg into gain ripple by removing the mean component
"""
# with open(path + file_name,'r') as this_file:
#with open(path + file_name,'r') as this_file:
# data = this_file.read()
# data.strip()
#data.strip()
#data = re.sub(r"([0-9])([ ]{1,3})([0-9-+])",r"\1,\3",data)
#data = list(data.split(","))
#data = [float(x) for x in data]
data = np.loadtxt(file_name)
print(len(data), file_name)
if field == gain_ripple_field or field == nf_ripple_field:
# consider ripple excursion only to avoid redundant information
# because the max flat_gain is already given by the 'gain_flat' field in json
# remove the mean component
#consider ripple excursion only to avoid redundant information
#because the max flat_gain is already given by the 'gain_flat' field in json
#remove the mean component
print(file_name, ', mean value =', data.mean(), ' is substracted')
data = data - data.mean()
data = data.tolist()
return data
def input_json(path):
"""read the json input file and add all the 96 channels txt files
create the output json file with output_json_file_name"""
with open(path, 'r') as edfa_json_file:
with open(path,'r') as edfa_json_file:
amp_text = edfa_json_file.read()
amp_dict = json.loads(amp_text)
for k, v in amp_dict.items():
if re.search(r'.txt$', str(v)):
if re.search(r'.txt$',str(v)) :
amp_dict[k] = read_file(k, v)
amp_text = json.dumps(amp_dict, indent=4)
# print(amp_text)
with open(output_json_file_name, 'w') as edfa_json_file:
#print(amp_text)
with open(output_json_file_name,'w') as edfa_json_file:
edfa_json_file.write(amp_text)
if __name__ == '__main__':
if len(sys.argv) == 2:
path = sys.argv[1]
else:
path = input_json_file_name
input_json(path)
input_json(path)


@@ -29,58 +29,24 @@
"allowed_for_design": false
},
{
"type_variety": "openroadm_ila_low_noise",
"type_variety": "low_noise",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"gain_min": 12,
"p_max": 22,
"nf_coef": [-8.104e-4,-6.221e-2,-5.889e-1,37.62],
"allowed_for_design": false
},
{
"type_variety": "openroadm_ila_standard",
"type_variety": "standard",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"gain_min": 12,
"p_max": 22,
"nf_coef": [-5.952e-4,-6.250e-2,-1.071,28.99],
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_preamp",
"type_def": "openroadm_preamp",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_preamp_typical_ver5",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4,-6.250e-2,-1.071,28.99],
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_preamp_worstcase_ver5",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4,-6.250e-2,-1.071,27.99],
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_booster",
"type_def": "openroadm_booster",
"gain_flatmax": 32,
"gain_min": 0,
"p_max": 22,
"allowed_for_design": false
},
{
"type_variety": "std_high_gain",
"type_def": "variable_gain",
"gain_flatmax": 35,
@@ -180,27 +146,47 @@
"Fiber":[{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
"gamma": 0.00127
},
{
"type_variety": "NZDF",
"dispersion": 0.5e-05,
"effective_area": 72e-12,
"pmd_coef": 1.265e-15
"gamma": 0.00146
},
{
"type_variety": "LOF",
"dispersion": 2.2e-05,
"effective_area": 125e-12,
"pmd_coef": 1.265e-15
"gamma": 0.000843
}
],
"RamanFiber":[{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
"gamma": 0.00127,
"raman_efficiency": {
"cr":[
0, 9.4E-06, 2.92E-05, 4.88E-05, 6.82E-05, 8.31E-05, 9.4E-05, 0.0001014, 0.0001069, 0.0001119,
0.0001217, 0.0001268, 0.0001365, 0.000149, 0.000165, 0.000181, 0.0001977, 0.0002192, 0.0002469,
0.0002749, 0.0002999, 0.0003206, 0.0003405, 0.0003592, 0.000374, 0.0003826, 0.0003841, 0.0003826,
0.0003802, 0.0003756, 0.0003549, 0.0003795, 0.000344, 0.0002933, 0.0002024, 0.0001158, 8.46E-05,
7.14E-05, 6.86E-05, 8.5E-05, 8.93E-05, 9.01E-05, 8.15E-05, 6.67E-05, 4.37E-05, 3.28E-05, 2.96E-05,
2.65E-05, 2.57E-05, 2.81E-05, 3.08E-05, 3.67E-05, 5.85E-05, 6.63E-05, 6.36E-05, 5.5E-05, 4.06E-05,
2.77E-05, 2.42E-05, 1.87E-05, 1.6E-05, 1.4E-05, 1.13E-05, 1.05E-05, 9.8E-06, 9.8E-06, 1.13E-05,
1.64E-05, 1.95E-05, 2.38E-05, 2.26E-05, 2.03E-05, 1.48E-05, 1.09E-05, 9.8E-06, 1.05E-05, 1.17E-05,
1.25E-05, 1.21E-05, 1.09E-05, 9.8E-06, 8.2E-06, 6.6E-06, 4.7E-06, 2.7E-06, 1.9E-06, 1.2E-06, 4E-07,
2E-07, 1E-07
],
"frequency_offset":[
0, 0.5e12, 1e12, 1.5e12, 2e12, 2.5e12, 3e12, 3.5e12, 4e12, 4.5e12, 5e12, 5.5e12, 6e12, 6.5e12, 7e12,
7.5e12, 8e12, 8.5e12, 9e12, 9.5e12, 10e12, 10.5e12, 11e12, 11.5e12, 12e12, 12.5e12, 12.75e12,
13e12, 13.25e12, 13.5e12, 14e12, 14.5e12, 14.75e12, 15e12, 15.5e12, 16e12, 16.5e12, 17e12,
17.5e12, 18e12, 18.25e12, 18.5e12, 18.75e12, 19e12, 19.5e12, 20e12, 20.5e12, 21e12, 21.5e12,
22e12, 22.5e12, 23e12, 23.5e12, 24e12, 24.5e12, 25e12, 25.5e12, 26e12, 26.5e12, 27e12, 27.5e12, 28e12,
28.5e12, 29e12, 29.5e12, 30e12, 30.5e12, 31e12, 31.5e12, 32e12, 32.5e12, 33e12, 33.5e12, 34e12, 34.5e12,
35e12, 35.5e12, 36e12, 36.5e12, 37e12, 37.5e12, 38e12, 38.5e12, 39e12, 39.5e12, 40e12, 40.5e12, 41e12,
41.5e12, 42e12
]
}
}
],
"Span":[{
@@ -220,8 +206,6 @@
"Roadm":[{
"target_pch_out_db": -20,
"add_drop_osnr": 38,
"pmd": 0,
"pdl": 0,
"restrictions": {
"preamp_variety_list":[],
"booster_variety_list":[]


@@ -624,70 +624,6 @@
"con_out": null
}
},
{
"uid": "west edfa in Quimper",
"metadata": {
"location": {
"city": "Quimper",
"region": "RLD",
"latitude": 1.0,
"longitude": 1.0
}
},
"type": "Edfa",
"operational": {
"gain_target": null,
"tilt_target": 0
}
},
{
"uid": "west edfa in Ploermel",
"metadata": {
"location": {
"city": "Ploermel",
"region": "RLD",
"latitude": 1.0,
"longitude": 2.0
}
},
"type": "Edfa",
"operational": {
"gain_target": null,
"tilt_target": 0
}
},
{
"uid": "east edfa in Quimper",
"metadata": {
"location": {
"city": "Quimper",
"region": "RLD",
"latitude": 1.0,
"longitude": 1.0
}
},
"type": "Edfa",
"operational": {
"gain_target": null,
"tilt_target": 0
}
},
{
"uid": "east edfa in Ploermel",
"metadata": {
"location": {
"city": "Ploermel",
"region": "RLD",
"latitude": 1.0,
"longitude": 2.0
}
},
"type": "Edfa",
"operational": {
"gain_target": null,
"tilt_target": 0
}
},
{
"uid": "east edfa in Lannion_CAS to Corlay",
"metadata": {
@@ -699,7 +635,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -708,18 +644,41 @@
}
},
{
"uid": "east edfa in Lorient_KMA to Vannes_KBE",
"uid": "east edfa in Corlay to Loudeac",
"metadata": {
"location": {
"city": "Lorient_KMA",
"city": "Corlay",
"region": "RLD",
"latitude": 2.0,
"longitude": 3.0
"longitude": 1.0
}
},
"type": "Fused",
"params": {
"loss": 0
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
"tilt_target": 0,
"out_voa": null
}
},
{
"uid": "east edfa in Loudeac to Lorient_KMA",
"metadata": {
"location": {
"city": "Loudeac",
"region": "RLD",
"latitude": 2.0,
"longitude": 2.0
}
},
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
"tilt_target": 0,
"out_voa": null
}
},
{
@@ -733,7 +692,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -771,7 +730,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -790,7 +749,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -809,7 +768,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -828,7 +787,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -847,7 +806,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -866,7 +825,45 @@
}
},
"type": "Edfa",
"type_variety": "std_high_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
"tilt_target": 0,
"out_voa": null
}
},
{
"uid": "west edfa in Corlay to Loudeac",
"metadata": {
"location": {
"city": "Corlay",
"region": "RLD",
"latitude": 2.0,
"longitude": 1.0
}
},
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
"tilt_target": 0,
"out_voa": null
}
},
{
"uid": "west edfa in Loudeac to Lorient_KMA",
"metadata": {
"location": {
"city": "Loudeac",
"region": "RLD",
"latitude": 2.0,
"longitude": 2.0
}
},
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -961,7 +958,7 @@
}
},
"type": "Edfa",
"type_variety": "std_high_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -980,7 +977,7 @@
}
},
"type": "Edfa",
"type_variety": "std_medium_gain",
"type_variety": "std_low_gain",
"operational": {
"gain_target": null,
"delta_p": 1.0,
@@ -1025,6 +1022,21 @@
"tilt_target": 0,
"out_voa": null
}
},
{
"uid": "east edfa in Lorient_KMA to Vannes_KBE",
"metadata": {
"location": {
"city": "Lorient_KMA",
"region": "RLD",
"latitude": 2.0,
"longitude": 3.0
}
},
"type": "Fused",
"params": {
"loss": 0
}
}
],
"connections": [
@@ -1254,34 +1266,18 @@
},
{
"from_node": "fiber (Brest_KLA → Quimper)-",
"to_node": "west edfa in Quimper"
},
{
"from_node": "west edfa in Quimper",
"to_node": "fiber (Quimper → Lorient_KMA)-"
},
{
"from_node": "fiber (Lorient_KMA → Quimper)-",
"to_node": "east edfa in Quimper"
},
{
"from_node": "east edfa in Quimper",
"to_node": "fiber (Quimper → Brest_KLA)-"
},
{
"from_node": "fiber (Vannes_KBE → Ploermel)-",
"to_node": "west edfa in Ploermel"
},
{
"from_node": "west edfa in Ploermel",
"to_node": "fiber (Ploermel → Rennes_STA)-"
},
{
"from_node": "fiber (Rennes_STA → Ploermel)-",
"to_node": "east edfa in Ploermel"
},
{
"from_node": "east edfa in Ploermel",
"to_node": "fiber (Ploermel → Vannes_KBE)-"
},
{
@@ -1325,4 +1321,4 @@
"to_node": "trx Brest_KLA"
}
]
}
}


@@ -14,8 +14,8 @@
"trx_mode": null,
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
@@ -39,8 +39,8 @@
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
@@ -104,8 +104,8 @@
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
@@ -129,8 +129,8 @@
"trx_mode": null,
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 75000000000.0,
@@ -154,8 +154,8 @@
"trx_mode": "mode 2",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 75000000000.0,
@@ -179,8 +179,8 @@
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
@@ -204,8 +204,8 @@
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
@@ -229,8 +229,8 @@
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": null,
"M": null
"N": "null",
"M": "null"
}
],
"spacing": 75000000000.0,

examples/path_requests_run.py (new executable file, 569 lines)

@@ -0,0 +1,569 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
path_requests_run.py
====================
Reads a JSON request file in accordance with the Yang model
for requesting path computation and returns path results in terms
of path and feasibilty.
See: draft-ietf-teas-yang-path-computation-01.txt
"""
from sys import exit
from argparse import ArgumentParser
from pathlib import Path
from collections import namedtuple
from logging import getLogger, basicConfig, CRITICAL, DEBUG, INFO
from json import dumps, loads
from numpy import mean
from gnpy.core.service_sheet import convert_service_sheet, Request_element, Element
from gnpy.core.utils import load_json
from gnpy.core.network import load_network, build_network, save_network, network_from_json
from gnpy.core.equipment import load_equipment, trx_mode_params, automatic_nch
from gnpy.core.elements import Transceiver, Roadm
from gnpy.core.utils import db2lin, lin2db
from gnpy.core.request import (Path_request, Result_element,
propagate, jsontocsv, Disjunction, compute_path_dsjctn,
requests_aggregation, propagate_and_optimize_mode,
BLOCKING_NOPATH, BLOCKING_NOMODE,
find_reversed_path)
from gnpy.core.exceptions import (ConfigurationError, EquipmentConfigError, NetworkTopologyError,
ServiceError, DisjunctionError)
import gnpy.core.ansi_escapes as ansi_escapes
from gnpy.core.spectrum_assignment import (build_oms_list, pth_assign_spectrum)
from copy import copy, deepcopy
from textwrap import dedent
from math import ceil
from flask import Flask, jsonify, make_response, request
from flask_restful import Api, Resource, reqparse, fields
#EQPT_LIBRARY_FILENAME = Path(__file__).parent / 'eqpt_config.json'
LOGGER = getLogger(__name__)
PARSER = ArgumentParser(description='A function that computes performances for a list of ' +
'services provided in a json file or an excel sheet.')
PARSER.add_argument('network_filename', nargs='?', type=Path,\
default=Path(__file__).parent / 'meshTopologyExampleV2.xls',\
help='input topology file in xls or json')
PARSER.add_argument('service_filename', nargs='?', type=Path,\
default=Path(__file__).parent / 'meshTopologyExampleV2.xls',\
help='input service file in xls or json')
PARSER.add_argument('eqpt_filename', nargs='?', type=Path,\
default=Path(__file__).parent / 'eqpt_config.json',\
help='input equipment library in json. Default is eqpt_config.json')
PARSER.add_argument('-bi', '--bidir', action='store_true',\
help='considers that all demands are bidir')
PARSER.add_argument('-v', '--verbose', action='count', default=0,\
help='increases verbosity for each occurence')
PARSER.add_argument('-o', '--output', type=Path)
PARSER.add_argument('-r', '--rest', action='count', default=0, help='use the REST API')
NETWORK_FILENAME = 'topoDemov1.json' #'disagregatedTopoDemov1.json' #
APP = Flask(__name__, static_url_path="")
API = Api(APP)
def requests_from_json(json_data, equipment):
""" converts the json data into a list of requests elements
"""
requests_list = []
for req in json_data['path-request']:
# init all params from request
params = {}
params['request_id'] = req['request-id']
params['source'] = req['source']
params['bidir'] = req['bidirectional']
params['destination'] = req['destination']
params['trx_type'] = req['path-constraints']['te-bandwidth']['trx_type']
params['trx_mode'] = req['path-constraints']['te-bandwidth']['trx_mode']
params['format'] = params['trx_mode']
params['spacing'] = req['path-constraints']['te-bandwidth']['spacing']
try:
nd_list = req['explicit-route-objects']['route-object-include-exclude']
except KeyError:
nd_list = []
params['nodes_list'] = [n['num-unnum-hop']['node-id'] for n in nd_list]
params['loose_list'] = [n['num-unnum-hop']['hop-type'] for n in nd_list]
# recover trx physical param (baudrate, ...) from type and mode
# in trx_mode_params optical power is read from equipment['SI']['default'] and
# nb_channel is computed based on min max frequency and spacing
trx_params = trx_mode_params(equipment, params['trx_type'], params['trx_mode'], True)
params.update(trx_params)
# print(trx_params['min_spacing'])
# optical power might be set differently in the request. if it is indicated then the
# params['power'] is updated
try:
if req['path-constraints']['te-bandwidth']['output-power']:
params['power'] = req['path-constraints']['te-bandwidth']['output-power']
except KeyError:
pass
# same process for nb-channel
f_min = params['f_min']
f_max_from_si = params['f_max']
try:
if req['path-constraints']['te-bandwidth']['max-nb-of-channel'] is not None:
nch = req['path-constraints']['te-bandwidth']['max-nb-of-channel']
params['nb_channel'] = nch
spacing = params['spacing']
params['f_max'] = f_min + nch*spacing
else:
params['nb_channel'] = automatic_nch(f_min, f_max_from_si, params['spacing'])
except KeyError:
params['nb_channel'] = automatic_nch(f_min, f_max_from_si, params['spacing'])
consistency_check(params, f_max_from_si)
try:
params['path_bandwidth'] = req['path-constraints']['te-bandwidth']['path_bandwidth']
except KeyError:
pass
requests_list.append(Path_request(**params))
return requests_list
def consistency_check(params, f_max_from_si):
""" checks that the requested parameters are consistant (spacing vs nb channel,
vs transponder mode...)
"""
f_min = params['f_min']
f_max = params['f_max']
max_recommanded_nb_channels = automatic_nch(f_min, f_max, params['spacing'])
if params['baud_rate'] is not None:
#implicitely means that a mode is defined with min_spacing
if params['min_spacing'] > params['spacing']:
msg = f'Request {params["request_id"]} has spacing below transponder ' +\
f'{params["trx_type"]} {params["trx_mode"]} min spacing value ' +\
f'{params["min_spacing"]*1e-9}GHz.\nComputation stopped'
print(msg)
LOGGER.critical(msg)
raise ServiceError(msg)
if f_max > f_max_from_si:
msg = dedent(f'''
Requested channel number {params["nb_channel"]}, baud rate {params["baud_rate"]} GHz and requested spacing {params["spacing"]*1e-9}GHz
is not consistent with frequency range {f_min*1e-12} THz, {f_max*1e-12} THz, min recommanded spacing {params["min_spacing"]*1e-9}GHz.
max recommanded nb of channels is {max_recommanded_nb_channels}
Computation stopped.''')
LOGGER.critical(msg)
raise ServiceError(msg)
def disjunctions_from_json(json_data):
""" reads the disjunction requests from the json dict and create the list
of requested disjunctions for this set of requests
"""
disjunctions_list = []
try:
temp_test = json_data['synchronization']
except KeyError:
temp_test = []
if temp_test:
for snc in json_data['synchronization']:
params = {}
params['disjunction_id'] = snc['synchronization-id']
params['relaxable'] = snc['svec']['relaxable']
params['link_diverse'] = 'link' in snc['svec']['disjointness']
params['node_diverse'] = 'node' in snc['svec']['disjointness']
params['disjunctions_req'] = snc['svec']['request-id-number']
disjunctions_list.append(Disjunction(**params))
return disjunctions_list
def load_requests(filename, eqpt_filename, bidir):
""" loads the requests from a json or an excel file into a data string
"""
if filename.suffix.lower() == '.xls':
LOGGER.info('Automatically converting requests from XLS to JSON')
try:
json_data = convert_service_sheet(filename, eqpt_filename, bidir=bidir)
except ServiceError as this_e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {this_e}')
exit(1)
else:
with open(filename, encoding='utf-8') as my_f:
json_data = loads(my_f.read())
return json_data
def compute_path_with_disjunction(network, equipment, pathreqlist, pathlist):
""" use a list but a dictionnary might be helpful to find path based on request_id
TODO change all these req, dsjct, res lists into dict !
"""
path_res_list = []
reversed_path_res_list = []
propagated_reversed_path_res_list = []
for i, pathreq in enumerate(pathreqlist):
# use the power specified in requests but might be different from the one
# specified for design the power is an optional parameter for requests
# definition if optional, use the one defines in eqt_config.json
p_db = lin2db(pathreq.power*1e3)
p_total_db = p_db + lin2db(pathreq.nb_channel)
print(f'request {pathreq.request_id}')
print(f'Computing path from {pathreq.source} to {pathreq.destination}')
# adding first node to be clearer on the output
print(f'with path constraint: {[pathreq.source] + pathreq.nodes_list}')
# pathlist[i] contains the whole path information for request i
# last element is a transciver and where the result of the propagation is
# recorded.
# Important Note: since transceivers attached to roadms are actually logical
# elements to simulate performance, several demands having the same destination
# may use the same transponder for the performance simulation. This is why
# we use deepcopy: to ensure that each propagation is recorded and not overwritten
total_path = deepcopy(pathlist[i])
print(f'Computed path (roadms):{[e.uid for e in total_path if isinstance(e, Roadm)]}')
# for debug
# print(f'{pathreq.baud_rate} {pathreq.power} {pathreq.spacing} {pathreq.nb_channel}')
if total_path:
if pathreq.baud_rate is not None:
# means that at this point the mode was entered/forced by user and thus a
# baud_rate was defined
total_path = propagate(total_path, pathreq, equipment)
temp_snr01nm = round(mean(total_path[-1].snr+lin2db(pathreq.baud_rate/(12.5e9))), 2)
if temp_snr01nm < pathreq.OSNR:
msg = f'\tWarning! Request {pathreq.request_id} computed path from' +\
f' {pathreq.source} to {pathreq.destination} does not pass with' +\
f' {pathreq.tsp_mode}\n\tcomputedSNR in 0.1nm = {temp_snr01nm} ' +\
f'- required osnr {pathreq.OSNR}'
print(msg)
LOGGER.warning(msg)
pathreq.blocking_reason = 'MODE_NOT_FEASIBLE'
else:
total_path, mode = propagate_and_optimize_mode(total_path, pathreq, equipment)
# if no baudrate satisfies spacing, no mode is returned and the last explored mode
# a warning is shown in the propagate_and_optimize_mode
# propagate_and_optimize_mode function returns the mode with the highest bitrate
# that passes. if no mode passes, then a attribute blocking_reason is added on
# pathreq that contains the reason for blocking: 'NO_PATH', 'NO_FEASIBLE_MODE', ...
try:
if pathreq.blocking_reason in BLOCKING_NOPATH:
total_path = []
elif pathreq.blocking_reason in BLOCKING_NOMODE:
pathreq.baud_rate = mode['baud_rate']
pathreq.tsp_mode = mode['format']
pathreq.format = mode['format']
pathreq.OSNR = mode['OSNR']
pathreq.tx_osnr = mode['tx_osnr']
pathreq.bit_rate = mode['bit_rate']
# other blocking reason should not appear at this point
except AttributeError:
pathreq.baud_rate = mode['baud_rate']
pathreq.tsp_mode = mode['format']
pathreq.format = mode['format']
pathreq.OSNR = mode['OSNR']
pathreq.tx_osnr = mode['tx_osnr']
pathreq.bit_rate = mode['bit_rate']
# reversed path is needed for correct spectrum assignment
reversed_path = find_reversed_path(pathlist[i])
if pathreq.bidir:
# only propagate if bidir is true, but needs the reversed path anyway for
# correct spectrum assignment
rev_p = deepcopy(reversed_path)
print(f'\n\tPropagating Z to A direction {pathreq.destination} to {pathreq.source}')
print(f'\tPath (roadsm) {[r.uid for r in rev_p if isinstance(r,Roadm)]}\n')
propagated_reversed_path = propagate(rev_p, pathreq, equipment)
temp_snr01nm = round(mean(propagated_reversed_path[-1].snr +\
lin2db(pathreq.baud_rate/(12.5e9))), 2)
if temp_snr01nm < pathreq.OSNR:
msg = f'\tWarning! Request {pathreq.request_id} computed path from' +\
f' {pathreq.source} to {pathreq.destination} does not pass with' +\
f' {pathreq.tsp_mode}\n' +\
f'\tcomputedSNR in 0.1nm = {temp_snr01nm} - required osnr {pathreq.OSNR}'
print(msg)
LOGGER.warning(msg)
# TODO selection of mode should also be on reversed direction !!
pathreq.blocking_reason = 'MODE_NOT_FEASIBLE'
else:
propagated_reversed_path = []
else:
msg = 'Total path is empty. No propagation'
print(msg)
LOGGER.info(msg)
reversed_path = []
propagated_reversed_path = []
path_res_list.append(total_path)
reversed_path_res_list.append(reversed_path)
propagated_reversed_path_res_list.append(propagated_reversed_path)
# print to have a nice output
print('')
return path_res_list, reversed_path_res_list, propagated_reversed_path_res_list
def correct_route_list(network, pathreqlist):
""" prepares the format of route list of nodes to be consistant
remove wrong names, remove endpoints
also correct source and destination
"""
anytype = [n.uid for n in network.nodes()]
# TODO there is a problem of identification of fibers in case of parallel fibers
# between two adjacent roadms so fiber constraint is not supported
transponders = [n.uid for n in network.nodes() if isinstance(n, Transceiver)]
for pathreq in pathreqlist:
for i, n_id in enumerate(pathreq.nodes_list):
# replace possibly wrong name with a formated roadm name
# print(n_id)
if n_id not in anytype:
# find nodes name that include constraint among all possible names except
# transponders (not yet supported as constraints).
nodes_suggestion = [uid for uid in anytype \
if n_id.lower() in uid.lower() and uid not in transponders]
if pathreq.loose_list[i] == 'LOOSE':
if len(nodes_suggestion) > 0:
new_n = nodes_suggestion[0]
print(f'invalid route node specified:\
\n\'{n_id}\', replaced with \'{new_n}\'')
pathreq.nodes_list[i] = new_n
else:
print(f'\x1b[1;33;40m'+f'invalid route node specified \'{n_id}\',' +\
f' could not use it as constraint, skipped!'+'\x1b[0m')
pathreq.nodes_list.remove(n_id)
pathreq.loose_list.pop(i)
else:
msg = f'\x1b[1;33;40m'+f'could not find node: {n_id} in network topology.' +\
f' Strict constraint can not be applied.' + '\x1b[0m'
LOGGER.critical(msg)
raise ValueError(msg)
if pathreq.source not in transponders:
msg = f'\x1b[1;31;40m' + f'Request: {pathreq.request_id}: could not find' +\
f' transponder source: {pathreq.source}.'+'\x1b[0m'
LOGGER.critical(msg)
print(f'{msg}\nComputation stopped.')
raise ServiceError(msg)
if pathreq.destination not in transponders:
msg = f'\x1b[1;31;40m'+f'Request: {pathreq.request_id}: could not find' +\
f' transponder destination: {pathreq.destination}.'+'\x1b[0m'
LOGGER.critical(msg)
print(f'{msg}\nComputation stopped.')
raise ServiceError(msg)
# TODO remove endpoints from this list in case they were added by the user
# in the xls or json files
return pathreqlist
def correct_disjn(disjn):
""" clean disjunctions to remove possible repetition
"""
local_disjn = disjn.copy()
for elem in local_disjn:
for dis_elem in local_disjn:
if set(elem.disjunctions_req) == set(dis_elem.disjunctions_req) and\
elem.disjunction_id != dis_elem.disjunction_id:
local_disjn.remove(dis_elem)
return local_disjn
def path_result_json(pathresult):
""" create the response dictionnary
"""
data = {
'response': [n.json for n in pathresult]
}
return data
def compute_requests(network, data, equipment):
""" Main program calling functions
"""
# Build the network once using the default power defined in SI in eqpt config
# TODO power density: db2linp(ower_dbm": 0)/power_dbm": 0 * nb channels as defined by
# spacing, f_min and f_max
p_db = equipment['SI']['default'].power_dbm
p_total_db = p_db + lin2db(automatic_nch(equipment['SI']['default'].f_min,\
equipment['SI']['default'].f_max, equipment['SI']['default'].spacing))
build_network(network, equipment, p_db, p_total_db)
save_network(ARGS.network_filename, network)
oms_list = build_oms_list(network, equipment)
try:
rqs = requests_from_json(data, equipment)
except ServiceError as this_e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {this_e}')
raise this_e
# check that request ids are unique. Non unique ids, may
# mess the computation: better to stop the computation
all_ids = [r.request_id for r in rqs]
if len(all_ids) != len(set(all_ids)):
for item in list(set(all_ids)):
all_ids.remove(item)
msg = f'Requests id {all_ids} are not unique'
LOGGER.critical(msg)
raise ServiceError(msg)
try:
rqs = correct_route_list(network, rqs)
except ServiceError as this_e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {this_e}')
raise this_e
#exit(1)
# pths = compute_path(network, equipment, rqs)
dsjn = disjunctions_from_json(data)
print('\x1b[1;34;40m' + f'List of disjunctions' + '\x1b[0m')
print(dsjn)
# need to warn or correct in case of wrong disjunction form
# disjunction must not be repeated with same or different ids
dsjn = correct_disjn(dsjn)
# Aggregate demands with same exact constraints
print('\x1b[1;34;40m' + f'Aggregating similar requests' + '\x1b[0m')
rqs, dsjn = requests_aggregation(rqs, dsjn)
# TODO export novel set of aggregated demands in a json file
print('\x1b[1;34;40m' + 'The following services have been requested:' + '\x1b[0m')
print(rqs)
print('\x1b[1;34;40m' + f'Computing all paths with constraints' + '\x1b[0m')
try:
pths = compute_path_dsjctn(network, equipment, rqs, dsjn)
except DisjunctionError as this_e:
print(f'{ansi_escapes.red}Disjunction error:{ansi_escapes.reset} {this_e}')
raise this_e
print('\x1b[1;34;40m' + f'Propagating on selected path' + '\x1b[0m')
propagatedpths, reversed_pths, reversed_propagatedpths = \
compute_path_with_disjunction(network, equipment, rqs, pths)
# Note that deepcopy used in compute_path_with_disjunction returns
# a list of nodes which are not belonging to network (they are copies of the node objects).
# so there can not be propagation on these nodes.
pth_assign_spectrum(pths, rqs, oms_list, reversed_pths)
print('\x1b[1;34;40m'+f'Result summary'+ '\x1b[0m')
header = ['req id', ' demand', ' snr@bandwidth A-Z (Z-A)', ' snr@0.1nm A-Z (Z-A)',\
' Receiver minOSNR', ' mode', ' Gbit/s', ' nb of tsp pairs',\
'N,M or blocking reason']
data = []
data.append(header)
for i, this_p in enumerate(propagatedpths):
rev_pth = reversed_propagatedpths[i]
if rev_pth and this_p:
psnrb = f'{round(mean(this_p[-1].snr),2)} ({round(mean(rev_pth[-1].snr),2)})'
psnr = f'{round(mean(this_p[-1].snr_01nm), 2)}' +\
f' ({round(mean(rev_pth[-1].snr_01nm),2)})'
elif this_p:
psnrb = f'{round(mean(this_p[-1].snr),2)}'
psnr = f'{round(mean(this_p[-1].snr_01nm),2)}'
try :
if rqs[i].blocking_reason in BLOCKING_NOPATH:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} :',\
f'-', f'-', f'-', f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9,2)}',\
f'-', f'{rqs[i].blocking_reason}']
else:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} : ', psnrb,\
psnr, f'-', f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9, 2)}',\
f'-', f'{rqs[i].blocking_reason}']
except AttributeError:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} : ', psnrb,\
psnr, f'{rqs[i].OSNR}', f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9,2)}',\
f'{ceil(rqs[i].path_bandwidth / rqs[i].bit_rate) }', f'({rqs[i].N},{rqs[i].M})']
data.append(line)
col_width = max(len(word) for row in data for word in row[2:]) # padding
firstcol_width = max(len(row[0]) for row in data) # padding
secondcol_width = max(len(row[1]) for row in data) # padding
for row in data:
firstcol = ''.join(row[0].ljust(firstcol_width))
secondcol = ''.join(row[1].ljust(secondcol_width))
remainingcols = ''.join(word.center(col_width, ' ') for word in row[2:])
print(f'{firstcol} {secondcol} {remainingcols}')
print('\x1b[1;33;40m'+f'Result summary shows mean SNR and OSNR (average over all channels)' +\
'\x1b[0m')
return propagatedpths, reversed_propagatedpths, rqs
def launch_cli(network, data, equipment):
""" Compute requests using network, data and equipment with client line interface
"""
propagatedpths, reversed_propagatedpths, rqs = compute_requests(network, data, equipment)
#Generate the output
if ARGS.output :
result = []
# assumes that list of rqs and list of propgatedpths have same order
for i, pth in enumerate(propagatedpths):
result.append(Result_element(rqs[i], pth, reversed_propagatedpths[i]))
temp = path_result_json(result)
fnamecsv = f'{str(ARGS.output)[0:len(str(ARGS.output))-len(str(ARGS.output.suffix))]}.csv'
fnamejson = f'{str(ARGS.output)[0:len(str(ARGS.output))-len(str(ARGS.output.suffix))]}.json'
with open(fnamejson, 'w', encoding='utf-8') as fjson:
fjson.write(dumps(path_result_json(result), indent=2, ensure_ascii=False))
with open(fnamecsv, "w", encoding='utf-8') as fcsv:
jsontocsv(temp, equipment, fcsv)
print('\x1b[1;34;40m'+f'saving in {ARGS.output} and {fnamecsv}'+ '\x1b[0m')
class GnpyAPI(Resource):
""" Compute requests using network, data and equipment with rest api
"""
def get(self):
return {"ping": True}, 200
def post(self):
data = request.get_json()
equipment = load_equipment('examples/2019-demo-equipment.json')
topo_json = load_json('examples/2019-demo-topology.json')
network = network_from_json(topo_json, equipment)
try:
propagatedpths, reversed_propagatedpths, rqs = compute_requests(network, data, equipment)
# Generate the output
result = []
#assumes that list of rqs and list of propgatedpths have same order
for i, pth in enumerate(propagatedpths):
result.append(Result_element(rqs[i], pth, reversed_propagatedpths[i]))
return {"result":path_result_json(result)}, 201
except ServiceError as this_e:
msg = f'Service error: {this_e}'
return {"result": msg}, 400
API.add_resource(GnpyAPI, '/gnpy-experimental')
def main(args):
""" main function that calls all functions
"""
LOGGER.info(f'Computing path requests {args.service_filename} into JSON format')
print('\x1b[1;34;40m' +\
f'Computing path requests {args.service_filename} into JSON format'+ '\x1b[0m')
# for debug
# print( args.eqpt_filename)
try:
data = load_requests(args.service_filename, args.eqpt_filename, args.bidir)
equipment = load_equipment(args.eqpt_filename)
network = load_network(args.network_filename, equipment)
except EquipmentConfigError as this_e:
print(f'{ansi_escapes.red}Configuration error in the equipment library:{ansi_escapes.reset} {this_e}')
exit(1)
except NetworkTopologyError as this_e:
print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} {this_e}')
exit(1)
except ConfigurationError as this_e:
print(f'{ansi_escapes.red}Configuration error:{ansi_escapes.reset} {this_e}')
exit(1)
except ServiceError as this_e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {this_e}')
exit(1)
# input_str = raw_input("How will you use your program: c:[cli] , a:[api] ?")
# print(input_str)
#
if ((args.rest == 1) and (args.output is None)):
print('you have chosen the rest mode')
APP.run(host='0.0.0.0', port=5000, debug=True)
elif ((args.rest > 1) or ((args.rest == 1) and (args.output is not None))):
print('command is not well formulated')
else:
launch_cli(network, data, equipment)
if __name__ == '__main__':
ARGS = PARSER.parse_args()
basicConfig(level={2: DEBUG, 1: INFO, 0: CRITICAL}.get(ARGS.verbose, DEBUG))
main(ARGS)


@@ -20,12 +20,12 @@
"temperature": 283,
"raman_pumps": [
{
"power": 224.403e-3,
"power": 200e-3,
"frequency": 205e12,
"propagation_direction": "counterprop"
},
{
"power": 231.135e-3,
"power": 206e-3,
"frequency": 201e12,
"propagation_direction": "counterprop"
}
@@ -49,21 +49,6 @@
}
}
},
{
"uid": "Fused1",
"type": "Fused",
"params": {
"loss": 0
},
"metadata": {
"location": {
"latitude": 1.5,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Edfa1",
"type": "Edfa",
@@ -103,10 +88,6 @@
},
{
"from_node": "Span1",
"to_node": "Fused1"
},
{
"from_node": "Fused1",
"to_node": "Edfa1"
},
{

examples/serviceDemov1.json (new file, 180 lines)

@@ -0,0 +1,180 @@
{
"path-request": [
{
"request-id": "0",
"source": "trx site_a",
"destination": "trx site_b",
"src-tp-id": "trx site_a",
"dst-tp-id": "trx site_b",
"bidirectional": false,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Voyager",
"trx_mode": null,
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 100000000000.0
}
}
},
{
"request-id": "1",
"source": "trx site_a",
"destination": "trx site_b",
"src-tp-id": "trx site_a",
"dst-tp-id": "trx site_b",
"bidirectional": false,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Voyager",
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 200000000000.0
}
},
"explicit-route-objects": {
"route-object-include-exclude": [
{
"explicit-route-usage": "route-include-ero",
"index": 0,
"num-unnum-hop": {
"node-id": "Span1ab",
"link-tp-id": "link-tp-id is not used",
"hop-type": "STRICT"
}
}
]
}
},
{
"request-id": "2",
"source": "trx site_a",
"destination": "trx site_b",
"src-tp-id": "trx site_a",
"dst-tp-id": "trx site_b",
"bidirectional": false,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Voyager",
"trx_mode": "mode 1",
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 200000000000.0
}
},
"explicit-route-objects": {
"route-object-include-exclude": [
{
"explicit-route-usage": "route-include-ero",
"index": 0,
"num-unnum-hop": {
"node-id": "roadm site_c",
"link-tp-id": "link-tp-id is not used",
"hop-type": "STRICT"
}
}
]
}
},
{
"request-id": "3",
"source": "trx site_a",
"destination": "trx site_b",
"src-tp-id": "trx site_a",
"dst-tp-id": "trx site_b",
"bidirectional": false,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Voyager",
"trx_mode": null,
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 100000000000.0
}
}
},
{
"request-id": "4",
"source": "trx site_a",
"destination": "trx site_b",
"src-tp-id": "trx site_a",
"dst-tp-id": "trx site_b",
"bidirectional": false,
"path-constraints": {
"te-bandwidth": {
"technology": "flexi-grid",
"trx_type": "Voyager",
"trx_mode": null,
"effective-freq-slot": [
{
"N": "null",
"M": "null"
}
],
"spacing": 50000000000.0,
"max-nb-of-channel": null,
"output-power": null,
"path_bandwidth": 100000000000.0
}
}
}
],
"synchronization": [
{
"synchronization-id": "x",
"svec": {
"relaxable": "false",
"disjointness": "node link",
"request-id-number": [
"3",
"0"
]
}
},
{
"synchronization-id": "y",
"svec": {
"relaxable": "false",
"disjointness": "node link",
"request-id-number": [
"4",
"3",
"0"
]
}
}
]
}

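Requests 1 and 2 above use explicit-route-objects to force the path through Span1ab and roadm site_c respectively, while the synchronization block asks for node-and-link disjointness between requests 0/3 and 0/3/4. A small standard-library sketch to list those constraints:

```python
# Quick inspection of the demo service file; uses only the standard library.
import json

with open('examples/serviceDemov1.json', encoding='utf-8') as f:
    services = json.load(f)

for req in services['path-request']:
    ero = req.get('explicit-route-objects', {}).get('route-object-include-exclude', [])
    hops = [obj['num-unnum-hop']['node-id'] for obj in ero]
    print(f"request {req['request-id']}: include hops {hops or 'none'}")

for sync in services['synchronization']:
    svec = sync['svec']
    print(f"svec {sync['synchronization-id']}: disjoint requests "
          f"{svec['request-id-number']} ({svec['disjointness']})")
```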
examples/sim_params.json (new file, 14 lines)

@@ -0,0 +1,14 @@
{
"raman_computed_channels": [1, 18, 37, 56, 75],
"raman_parameters": {
"flag_raman": true,
"space_resolution": 10e3,
"tolerance": 1e-8
},
"nli_parameters": {
"nli_method_name": "ggn_spectrally_separated",
"wdm_grid_size": 50e9,
"dispersion_tolerance": 1,
"phase_shift_tollerance": 0.1
}
}

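These parameters only come into play when the topology contains RamanFiber spans; the transmission_main_example.py shown later in this diff loads them with load_sim_params and applies them via configure_network. A minimal sketch of that wiring; the file paths are placeholders and the call signatures are taken from the usage shown below, so treat the details as assumptions:

```python
# Sketch of applying the Raman/NLI simulation parameters, mirroring the flow
# in transmission_main_example.py below. File names are placeholders.
from gnpy.core.equipment import load_equipment
from gnpy.core.network import load_network, load_sim_params, configure_network

equipment = load_equipment('examples/eqpt_config.json')
network = load_network('examples/raman_example_network.json', equipment)
sim_params = load_sim_params('examples/sim_params.json')
configure_network(network, sim_params)   # attaches the Raman/NLI settings to the elements
```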

@@ -0,0 +1,303 @@
{ "nf_fit_coeff": [
0.000168241,
0.0469961,
0.0359549,
5.82851
],
"f_min": 191.35e12,
"f_max": 196.1e12,
"nf_ripple": [
-0.3110761646066259,
-0.3110761646066259,
-0.31110274831665313,
-0.31419329378173544,
-0.3172854168606314,
-0.32037911876162584,
-0.3233255190215882,
-0.31624321721895354,
-0.30915729645781326,
-0.30206775396360075,
-0.2949045115165272,
-0.26632156113294336,
-0.23772399031437283,
-0.20911178784023846,
-0.18048410390821285,
-0.14379944379052215,
-0.10709599992470213,
-0.07037375788020579,
-0.03372858157230583,
-0.015660302006048,
0.0024172385953583004,
0.020504047353947653,
0.03860013139908377,
0.05670549786742816,
0.07482015390297145,
0.0838762040768461,
0.09284481475528361,
0.1018180306253394,
0.11079585523492333,
0.1020395478432815,
0.09310160456603413,
0.08415906712621996,
0.07521193198077789,
0.0676340601339394,
0.06005437964543287,
0.052470799141237305,
0.044883315610536455,
0.037679759069084225,
0.03047647598902483,
0.02326948274513522,
0.01605877647020772,
0.021248462316134083,
0.02657315875107553,
0.03190060058247842,
0.03723078993416436,
0.04256372893215024,
0.047899419704645264,
0.03915515813685565,
0.030289222542492025,
0.021418708618354456,
0.012573926129294415,
0.006240488799898697,
-9.622162373026585e-05,
-0.006436207679519103,
-0.012779471908040341,
-0.02038153550619876,
-0.027999803010447587,
-0.035622012697103154,
-0.043236398934156144,
-0.04493583574805963,
-0.04663615264317309,
-0.048337350303318156,
-0.050039429413028365,
-0.051742390657545205,
-0.05342028484370278,
-0.05254242298580185,
-0.05166410580536087,
-0.05078533294804249,
-0.04990610405914272,
-0.05409792133358102,
-0.05832916277634124,
-0.06256260169582961,
-0.06660356886269536,
-0.04779792991567815,
-0.028982516728038848,
-0.010157321677553965,
0.00861320615127981,
0.01913736978785662,
0.029667009055877668,
0.04020212822983975,
0.050742731588695494,
0.061288823415841555,
0.07184040799914815,
0.1043252636301016,
0.13687829834471027,
0.1694483010211072,
0.202035284929368,
0.23624619427167134,
0.27048596623174515,
0.30474360397422756,
0.3390191214858807,
0.36358851509924695,
0.38814205928193013,
0.41270842850729195,
0.4372876328262819,
0.4372876328262819
],
"dgt": [
2.714526681131686,
2.705443819238505,
2.6947834587664494,
2.6841217449620203,
2.6681935771243177,
2.6521732021128046,
2.630396440815385,
2.602860350286428,
2.5696460593920065,
2.5364027376452056,
2.499446286796604,
2.4587748041127506,
2.414398437185221,
2.3699990328716107,
2.322373696229342,
2.271520771371253,
2.2174389328192197,
2.16337565384239,
2.1183028432496016,
2.082225099873648,
2.055100772005235,
2.0279625371819305,
2.0008103857988204,
1.9736443063300082,
1.9482128147680253,
1.9245345552113182,
1.9026104247588487,
1.8806927939516411,
1.862235672444246,
1.847275503201129,
1.835814081380705,
1.824381436842932,
1.8139629377087627,
1.8045606557581335,
1.7961751115773796,
1.7877868031023945,
1.7793941781790852,
1.7709972329654864,
1.7625959636196327,
1.7541903672600494,
1.7459181197626403,
1.737780757913635,
1.7297783508684146,
1.7217732861435076,
1.7137640932265894,
1.7057507692361864,
1.6918150918099673,
1.6719047669939942,
1.6460167077689267,
1.6201194134191075,
1.5986915141218316,
1.5817353179379183,
1.569199764184379,
1.5566577309558969,
1.545374152761467,
1.5353620432989845,
1.5266220576235803,
1.5178910621476225,
1.5097346239790443,
1.502153039909686,
1.495145456062699,
1.488134243479226,
1.48111939735681,
1.474100442252211,
1.4670307626366115,
1.4599103316162523,
1.45273959485914,
1.445565137158368,
1.4340878115214444,
1.418273806730323,
1.3981208704326855,
1.3779439775587023,
1.3598972673004606,
1.3439818461440451,
1.3301807335621048,
1.316383926863083,
1.3040618749785347,
1.2932153453410835,
1.2838336236692311,
1.2744470198196236,
1.2650555289898042,
1.2556591482982988,
1.2428104897182262,
1.2264996957264114,
1.2067249615595257,
1.1869318618366975,
1.1672278304018044,
1.1476135933863398,
1.1280891949729075,
1.108555289615659,
1.0895983485572227,
1.0712204022764056,
1.0534217504465226,
1.0356155337864215,
1.017807767853702,
1.0
],
"gain_ripple": [
0.1359703369791596,
0.11822862697916037,
0.09542181697916163,
0.06245819697916133,
0.02602813697916062,
-0.0036199830208403228,
-0.018326963020840026,
-0.0246928330208398,
-0.016792253020838643,
-0.0028138630208403015,
0.017572956979162058,
0.038328296979159404,
0.054956336979159914,
0.0670723869791594,
0.07091459697916136,
0.07094413697916124,
0.07114372697916238,
0.07533675697916209,
0.08731066697916035,
0.10313984697916112,
0.12276252697916235,
0.14239527697916188,
0.15945681697916214,
0.1739275269791598,
0.1767381569791624,
0.17037189697916233,
0.15216302697916007,
0.13114358697916018,
0.10802383697916085,
0.08548825697916129,
0.06916723697916183,
0.05848224697916038,
0.05447361697916264,
0.05154489697916276,
0.04946107697915991,
0.04717897697916129,
0.04551704697916037,
0.04467697697916151,
0.04072968697916224,
0.03285456697916089,
0.023488786979161347,
0.01659282697915998,
0.013321846979160057,
0.011234826979162449,
0.01030063697916006,
0.00936596697916059,
0.00874012697916271,
0.00842583697916055,
0.006965146979162284,
0.0040435869791615175,
0.0007104669791608842,
-0.0015763130208377163,
-0.006936193020838033,
-0.016475303020840215,
-0.028748483020837767,
-0.039618433020837784,
-0.051112303020840244,
-0.06468462302083822,
-0.07868024302083754,
-0.09101254302083817,
-0.10103437302083762,
-0.11041488302083735,
-0.11916081302083725,
-0.12789859302083784,
-0.1353792530208402,
-0.14160178302083892,
-0.1455411330208385,
-0.1484450830208388,
-0.14823350302084037,
-0.14591937302083835,
-0.1409032730208395,
-0.13525493302083902,
-0.1279646530208396,
-0.11963431302083904,
-0.11089282302084058,
-0.1027863830208382,
-0.09717347302083823,
-0.09343261302083761,
-0.0913487130208388,
-0.08906007302083907,
-0.0865687230208394,
-0.08407607302083875,
-0.07844600302084004,
-0.06968090302083851,
-0.05947139302083926,
-0.05095282302083959,
-0.042428283020839785,
-0.03218106302083967,
-0.01819858302084043,
-0.0021726530208390216,
0.01393231697916164,
0.028098946979159933,
0.040326236979161934,
0.05257029697916238,
0.06479749697916048,
0.07704745697916238
]
}

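This advanced EDFA description provides a four-coefficient NF polynomial (nf_fit_coeff) plus per-channel nf_ripple, gain_ripple and dynamic gain tilt (dgt) vectors across f_min to f_max. A rough sketch of evaluating such a polynomial with numpy.polyval, which gnpy.core.equipment imports further down in this diff; the exact evaluation point GNPy uses is an assumption here:

```python
# Assumption: nf_fit_coeff are polynomial coefficients in descending order,
# evaluated at the deviation from the amplifier's flat-max gain (in dB).
from numpy import polyval

nf_fit_coeff = [0.000168241, 0.0469961, 0.0359549, 5.82851]

for dgain in (0, -5, -10):      # illustrative gain deviations in dB
    nf = polyval(nf_fit_coeff, dgain)
    print(f'gain deviation {dgain:>4} dB -> NF about {nf:.2f} dB')
```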
examples/topoDemov1.json (new file, 703 lines)

@@ -0,0 +1,703 @@
{
"elements": [
{
"uid": "trx site_a",
"type": "Transceiver",
"metadata": {
"location": {
"latitude": 0,
"longitude": 0,
"city": "Site a",
"region": ""
}
}
},
{
"uid": "roadm site_a",
"type": "Roadm",
"params": {
"target_pch_out_db": -20,
"restrictions": {
"preamp_variety_list": [],
"booster_variety_list": []
}
},
"metadata": {
"location": {
"latitude": 0,
"longitude": 0,
"city": "Site a",
"region": ""
}
}
},
{
"uid": "Span1ab",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 100.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span1ba",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 100.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span2ab",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span2ba",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "roadm site_b",
"type": "Roadm",
"params": {
"target_pch_out_db": -20,
"restrictions": {
"preamp_variety_list": [],
"booster_variety_list": []
}
},
"metadata": {
"location": {
"latitude": 0,
"longitude": 0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "trx site_b",
"type": "Transceiver",
"metadata": {
"location": {
"latitude": 2,
"longitude": 0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "booster1 site_a",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site a",
"region": ""
}
}
},
{
"uid": "preamp site_b",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "booster1 site_b",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "preamp1 site_a",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_a",
"region": ""
}
}
},
{
"uid": "booster2 site_a",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site a",
"region": ""
}
}
},
{
"uid": "preamp2 site_b",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_b",
"region": ""
}
}
},
{
"uid": "booster2 site_b",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "preamp2 site_a",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_a",
"region": ""
}
}
},
{
"uid": "booster3 site_a",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site a",
"region": ""
}
}
},
{
"uid": "preamp3 site_b",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_b",
"region": ""
}
}
},
{
"uid": "booster3 site_b",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site b",
"region": ""
}
}
},
{
"uid": "preamp3 site_a",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_a",
"region": ""
}
}
},
{
"uid": "roadm site_c",
"type": "Roadm",
"params": {
"target_pch_out_db": -20,
"restrictions": {
"preamp_variety_list": [],
"booster_variety_list": []
}
},
"metadata": {
"location": {
"latitude": 0,
"longitude": 0,
"city": "Site c",
"region": ""
}
}
},
{
"uid": "booster1 site_c",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site c",
"region": ""
}
}
},
{
"uid": "preamp1 site_c",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_c",
"region": ""
}
}
},
{
"uid": "booster2 site_c",
"type": "Edfa",
"type_variety": "std_medium_gain",
"operational": {
"gain_target": 19.0,
"delta_p": -1.0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site c",
"region": ""
}
}
},
{
"uid": "preamp2 site_c",
"type": "Edfa",
"type_variety": "std_low_gain",
"operational": {
"gain_target": 18.0,
"delta_p": 0,
"tilt_target": 0,
"out_voa": 0
},
"metadata": {
"location": {
"latitude": 0.5,
"longitude": 0.0,
"city": "Site_c",
"region": ""
}
}
},
{
"uid": "Span1ac",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span1ca",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span1bc",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
},
{
"uid": "Span1cb",
"type": "Fiber",
"type_variety": "SSMF",
"params": {
"type_variety": "SSMF",
"length": 80.0,
"loss_coef": 0.2,
"length_units": "km",
"att_in": 0,
"con_in": 0.5,
"con_out": 0.5
},
"metadata": {
"location": {
"latitude": 1,
"longitude": 0,
"city": null,
"region": ""
}
}
}
],
"connections": [
{
"from_node": "trx site_a",
"to_node": "roadm site_a"
},
{
"from_node": "roadm site_a",
"to_node": "booster1 site_a"
},
{
"from_node": "booster1 site_a",
"to_node": "Span1ab"
},
{
"from_node": "Span1ab",
"to_node": "preamp site_b"
},
{
"from_node": "preamp site_b",
"to_node": "roadm site_b"
},
{
"from_node": "roadm site_b",
"to_node": "trx site_b"
},
{
"from_node": "roadm site_a",
"to_node": "booster2 site_a"
},
{
"from_node": "booster2 site_a",
"to_node": "Span2ab"
},
{
"from_node": "Span2ab",
"to_node": "preamp2 site_b"
},
{
"from_node": "preamp2 site_b",
"to_node": "roadm site_b"
},
{
"from_node": "roadm site_b",
"to_node": "booster1 site_b"
},
{
"from_node": "booster1 site_b",
"to_node": "Span1ba"
},
{
"from_node": "Span1ba",
"to_node": "preamp1 site_a"
},
{
"from_node": "preamp1 site_a",
"to_node": "roadm site_a"
},
{
"from_node": "roadm site_b",
"to_node": "booster2 site_b"
},
{
"from_node": "booster2 site_b",
"to_node": "Span2ba"
},
{
"from_node": "Span2ba",
"to_node": "preamp2 site_a"
},
{
"from_node": "preamp2 site_a",
"to_node": "roadm site_a"
},
{
"from_node": "roadm site_a",
"to_node": "booster3 site_a"
},
{
"from_node": "booster3 site_a",
"to_node": "Span1ac"
},
{
"from_node": "Span1ac",
"to_node": "preamp1 site_c"
},
{
"from_node": "preamp1 site_c",
"to_node": "roadm site_c"
},
{
"from_node": "roadm site_c",
"to_node": "booster1 site_c"
},
{
"from_node": "booster1 site_c",
"to_node": "Span1cb"
},
{
"from_node": "Span1cb",
"to_node": "preamp3 site_b"
},
{
"from_node": "preamp3 site_b",
"to_node": "roadm site_b"
},
{
"from_node": "roadm site_b",
"to_node": "booster3 site_b"
},
{
"from_node": "booster3 site_b",
"to_node": "Span1bc"
},
{
"from_node": "Span1bc",
"to_node": "preamp2 site_c"
},
{
"from_node": "preamp2 site_c",
"to_node": "roadm site_c"
},
{
"from_node": "roadm site_c",
"to_node": "booster2 site_c"
},
{
"from_node": "booster2 site_c",
"to_node": "Span1ca"
},
{
"from_node": "Span1ca",
"to_node": "preamp3 site_a"
},
{
"from_node": "preamp3 site_a",
"to_node": "roadm site_a"
}
]
}

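The topology above connects site_a and site_b through two parallel fibre pairs (Span1ab/Span1ba and Span2ab/Span2ba) and through a route via roadm site_c, each direction with its own booster/preamp pair. A quick sanity-check sketch, assuming the default examples/eqpt_config.json defines the std_medium_gain and std_low_gain EDFA varieties referenced above:

```python
# Sketch: load the demo topology and count elements per type.
from collections import Counter
from gnpy.core.equipment import load_equipment
from gnpy.core.network import load_network

equipment = load_equipment('examples/eqpt_config.json')
network = load_network('examples/topoDemov1.json', equipment)
print(Counter(type(n).__name__ for n in network.nodes()))
# With the file above this should report 2 Transceiver, 3 Roadm, 8 Fiber and 16 Edfa elements.
```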

@@ -0,0 +1,319 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
transmission_main_example.py
============================
Main example for transmission simulation.
Reads from network JSON (by default, `edfa_example_network.json`)
'''
from gnpy.core.equipment import load_equipment, trx_mode_params
from gnpy.core.utils import db2lin, lin2db, write_csv
from argparse import ArgumentParser
from sys import exit
from pathlib import Path
from json import loads
from collections import Counter
from logging import getLogger, basicConfig, INFO, ERROR, DEBUG
from numpy import linspace, mean, log10
from matplotlib.pyplot import show, axis, figure, title, text
from networkx import (draw_networkx_nodes, draw_networkx_edges,
draw_networkx_labels, dijkstra_path)
from gnpy.core.network import load_network, build_network, save_network, load_sim_params, configure_network
from gnpy.core.elements import Transceiver, Fiber, RamanFiber, Edfa, Roadm
from gnpy.core.info import create_input_spectral_information, SpectralInformation, Channel, Power, Pref
from gnpy.core.request import Path_request, RequestParams, compute_constrained_path, propagate2
from gnpy.core.exceptions import ConfigurationError, EquipmentConfigError, NetworkTopologyError
import gnpy.core.ansi_escapes as ansi_escapes
logger = getLogger(__name__)
def plot_baseline(network):
edges = set(network.edges())
pos = {n: (n.lng, n.lat) for n in network.nodes()}
labels = {n: n.location.city for n in network.nodes() if isinstance(n, Transceiver)}
city_labels = set(labels.values())
for n in network.nodes():
if n.location.city and n.location.city not in city_labels:
labels[n] = n.location.city
city_labels.add(n.location.city)
label_pos = pos
fig = figure()
kwargs = {'figure': fig, 'pos': pos}
plot = draw_networkx_nodes(network, nodelist=network.nodes(), node_color='#ababab', **kwargs)
draw_networkx_edges(network, edgelist=edges, edge_color='#ababab', **kwargs)
draw_networkx_labels(network, labels=labels, font_size=14, **{**kwargs, 'pos': label_pos})
axis('off')
show()
def plot_results(network, path, source, destination, infos):
path_edges = set(zip(path[:-1], path[1:]))
edges = set(network.edges()) - path_edges
pos = {n: (n.lng, n.lat) for n in network.nodes()}
nodes = {}
for k, (x, y) in pos.items():
nodes.setdefault((round(x, 1), round(y, 1)), []).append(k)
labels = {n: n.location.city for n in network.nodes() if isinstance(n, Transceiver)}
city_labels = set(labels.values())
for n in network.nodes():
if n.location.city and n.location.city not in city_labels:
labels[n] = n.location.city
city_labels.add(n.location.city)
label_pos = pos
fig = figure()
kwargs = {'figure': fig, 'pos': pos}
all_nodes = [n for n in network.nodes() if n not in path]
plot = draw_networkx_nodes(network, nodelist=all_nodes, node_color='#ababab', node_size=50, **kwargs)
draw_networkx_nodes(network, nodelist=path, node_color='#ff0000', node_size=55, **kwargs)
draw_networkx_edges(network, edgelist=edges, edge_color='#ababab', **kwargs)
draw_networkx_edges(network, edgelist=path_edges, edge_color='#ff0000', **kwargs)
draw_networkx_labels(network, labels=labels, font_size=14, **{**kwargs, 'pos': label_pos})
title(f'Propagating from {source.loc.city} to {destination.loc.city}')
axis('off')
heading = 'Spectral Information\n\n'
textbox = text(0.85, 0.20, heading, fontsize=14, fontname='Ubuntu Mono',
verticalalignment='top', transform=fig.axes[0].transAxes,
bbox={'boxstyle': 'round', 'facecolor': 'wheat', 'alpha': 0.5})
msgs = {(x, y): heading + '\n\n'.join(str(n) for n in ns if n in path)
for (x, y), ns in nodes.items()}
def hover(event):
if event.xdata is None or event.ydata is None:
return
if fig.contains(event):
x, y = round(event.xdata, 1), round(event.ydata, 1)
if (x, y) in msgs:
textbox.set_text(msgs[x, y])
else:
textbox.set_text(heading)
fig.canvas.draw_idle()
fig.canvas.mpl_connect('motion_notify_event', hover)
show()
def main(network, equipment, source, destination, sim_params, req=None):
result_dicts = {}
network_data = [{
'network_name' : str(args.filename),
'source' : source.uid,
'destination' : destination.uid
}]
result_dicts.update({'network': network_data})
design_data = [{
'power_mode' : equipment['Span']['default'].power_mode,
'span_power_range' : equipment['Span']['default'].delta_power_range_db,
'design_pch' : equipment['SI']['default'].power_dbm,
'baud_rate' : equipment['SI']['default'].baud_rate
}]
result_dicts.update({'design': design_data})
simulation_data = []
result_dicts.update({'simulation results': simulation_data})
power_mode = equipment['Span']['default'].power_mode
print('\n'.join([f'Power mode is set to {power_mode}',
f'=> it can be modified in eqpt_config.json - Span']))
pref_ch_db = lin2db(req.power*1e3) #reference channel power / span (SL=20dB)
pref_total_db = pref_ch_db + lin2db(req.nb_channel) #reference total power / span (SL=20dB)
build_network(network, equipment, pref_ch_db, pref_total_db)
path = compute_constrained_path(network, req)
if len([s.length for s in path if isinstance(s, RamanFiber)]):
if sim_params is None:
print(f'{ansi_escapes.red}Invocation error:{ansi_escapes.reset} RamanFiber requires passing simulation params via --sim-params')
exit(1)
configure_network(network, sim_params)
spans = [s.length for s in path if isinstance(s, RamanFiber) or isinstance(s, Fiber)]
print(f'\nThere are {len(spans)} fiber spans over {sum(spans)/1000:.0f} km between {source.uid} and {destination.uid}')
print(f'\nNow propagating between {source.uid} and {destination.uid}:')
try:
p_start, p_stop, p_step = equipment['SI']['default'].power_range_db
p_num = abs(int(round((p_stop - p_start)/p_step))) + 1 if p_step != 0 else 1
power_range = list(linspace(p_start, p_stop, p_num))
except TypeError:
print('invalid power range definition in eqpt_config, should be power_range_db: [lower, upper, step]')
power_range = [0]
if not power_mode:
#power cannot be changed in gain mode
power_range = [0]
for dp_db in power_range:
req.power = db2lin(pref_ch_db + dp_db)*1e-3
if power_mode:
print(f'\nPropagating with input power = {ansi_escapes.cyan}{lin2db(req.power*1e3):.2f} dBm{ansi_escapes.reset}:')
else:
print(f'\nPropagating in {ansi_escapes.cyan}gain mode{ansi_escapes.reset}: power cannot be set manually')
infos = propagate2(path, req, equipment)
if len(power_range) == 1:
for elem in path:
print(elem)
if power_mode:
print(f'\nTransmission result for input power = {lin2db(req.power*1e3):.2f} dBm:')
else:
print(f'\nTransmission results:')
print(f' Final SNR total (0.1 nm): {ansi_escapes.cyan}{mean(destination.snr_01nm):.02f} dB{ansi_escapes.reset}')
else:
print(path[-1])
#print(f'\n !!!!!!!!!!!!!!!!! TEST POINT !!!!!!!!!!!!!!!!!!!!!')
#print(f'carriers ase output of {path[1]} =\n {list(path[1].carriers("out", "nli"))}')
# => use "in" or "out" parameter
# => use "nli" or "ase" or "signal" or "total" parameter
if power_mode:
simulation_data.append({
'Pch_dBm' : pref_ch_db + dp_db,
'OSNR_ASE_0.1nm' : round(mean(destination.osnr_ase_01nm),2),
'OSNR_ASE_signal_bw' : round(mean(destination.osnr_ase),2),
'SNR_nli_signal_bw' : round(mean(destination.osnr_nli),2),
'SNR_total_signal_bw' : round(mean(destination.snr),2)
})
else:
simulation_data.append({
'gain_mode' : 'power cannot be set',
'OSNR_ASE_0.1nm' : round(mean(destination.osnr_ase_01nm),2),
'OSNR_ASE_signal_bw' : round(mean(destination.osnr_ase),2),
'SNR_nli_signal_bw' : round(mean(destination.osnr_nli),2),
'SNR_total_signal_bw' : round(mean(destination.snr),2)
})
write_csv(result_dicts, 'simulation_result.csv')
return path, infos
parser = ArgumentParser()
parser.add_argument('-e', '--equipment', type=Path,
default=Path(__file__).parent / 'eqpt_config.json')
parser.add_argument('--sim-params', type=Path,
default=None, help='Path to the JSON containing simulation parameters (required for Raman)')
parser.add_argument('--show-channels', action='store_true', help='Show final per-channel OSNR summary')
parser.add_argument('-pl', '--plot', action='store_true')
parser.add_argument('-v', '--verbose', action='count', default=0, help='increases verbosity for each occurrence')
parser.add_argument('-l', '--list-nodes', action='store_true', help='list all transceiver nodes')
parser.add_argument('-po', '--power', default=0, help='channel ref power in dBm')
parser.add_argument('-names', '--names-matching', action='store_true', help='display network names that are close matches')
parser.add_argument('filename', nargs='?', type=Path,
default=Path(__file__).parent / 'edfa_example_network.json')
parser.add_argument('source', nargs='?', help='source node')
parser.add_argument('destination', nargs='?', help='destination node')
if __name__ == '__main__':
args = parser.parse_args()
basicConfig(level={0: ERROR, 1: INFO, 2: DEBUG}.get(args.verbose, DEBUG))
try:
equipment = load_equipment(args.equipment)
network = load_network(args.filename, equipment, args.names_matching)
sim_params = load_sim_params(args.sim_params) if args.sim_params is not None else None
except EquipmentConfigError as e:
print(f'{ansi_escapes.red}Configuration error in the equipment library:{ansi_escapes.reset} {e}')
exit(1)
except NetworkTopologyError as e:
print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} {e}')
exit(1)
except ConfigurationError as e:
print(f'{ansi_escapes.red}Configuration error:{ansi_escapes.reset} {e}')
exit(1)
if args.plot:
plot_baseline(network)
transceivers = {n.uid: n for n in network.nodes() if isinstance(n, Transceiver)}
if not transceivers:
exit('Network has no transceivers!')
if len(transceivers) < 2:
exit('Network has only one transceiver!')
if args.list_nodes:
for uid in transceivers:
print(uid)
exit()
#First try to find exact match if source/destination provided
if args.source:
source = transceivers.pop(args.source, None)
valid_source = True if source else False
else:
source = None
logger.info('No source node specified: picking random transceiver')
if args.destination:
destination = transceivers.pop(args.destination, None)
valid_destination = True if destination else False
else:
destination = None
logger.info('No destination node specified: picking random transceiver')
#If no exact match try to find partial match
if args.source and not source:
#TODO code a more advanced regex to find nodes match
source = next((transceivers.pop(uid) for uid in transceivers \
if args.source.lower() in uid.lower()), None)
if args.destination and not destination:
#TODO code a more advanced regex to find nodes match
destination = next((transceivers.pop(uid) for uid in transceivers \
if args.destination.lower() in uid.lower()), None)
#If no partial match or no source/destination provided pick random
if not source:
source = list(transceivers.values())[0]
del transceivers[source.uid]
if not destination:
destination = list(transceivers.values())[0]
logger.info(f'source = {args.source!r}')
logger.info(f'destination = {args.destination!r}')
params = {}
params['request_id'] = 0
params['trx_type'] = ''
params['trx_mode'] = ''
params['source'] = source.uid
params['destination'] = destination.uid
params['bidir'] = False
params['nodes_list'] = [destination.uid]
params['loose_list'] = ['strict']
params['format'] = ''
params['path_bandwidth'] = 0
trx_params = trx_mode_params(equipment)
if args.power:
trx_params['power'] = db2lin(float(args.power))*1e-3
params.update(trx_params)
req = Path_request(**params)
path, infos = main(network, equipment, source, destination, sim_params, req)
save_network(args.filename, network)
if args.show_channels:
print('\nThe total SNR per channel at the end of the line is:')
print('{:>5}{:>26}{:>26}{:>28}{:>28}{:>28}' \
.format('Ch. #', 'Channel frequency (THz)', 'Channel power (dBm)', 'OSNR ASE (signal bw, dB)', 'SNR NLI (signal bw, dB)', 'SNR total (signal bw, dB)'))
for final_carrier, ch_osnr, ch_snr_nl, ch_snr in zip(infos[path[-1]][1].carriers, path[-1].osnr_ase, path[-1].osnr_nli, path[-1].snr):
ch_freq = final_carrier.frequency * 1e-12
ch_power = lin2db(final_carrier.power.signal*1e3)
print('{:5}{:26.2f}{:26.2f}{:28.2f}{:28.2f}{:28.2f}' \
.format(final_carrier.channel_number, round(ch_freq, 2), round(ch_power, 2), round(ch_osnr, 2), round(ch_snr_nl, 2), round(ch_snr, 2)))
if not args.source:
print(f'\n(No source node specified: picked {source.uid})')
elif not valid_source:
print(f'\n(Invalid source node {args.source!r} replaced with {source.uid})')
if not args.destination:
print(f'\n(No destination node specified: picked {destination.uid})')
elif not valid_destination:
print(f'\n(Invalid destination node {args.destination!r} replaced with {destination.uid})')
if args.plot:
plot_results(network, path, source, destination, infos)

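The power sweep in main() above derives its list of launch powers from equipment['SI']['default'].power_range_db, interpreted as [lower, upper, step] in dB around the reference channel power. The same computation as a standalone function, for clarity:

```python
# Standalone version of the power-sweep computation used in main() above.
from numpy import linspace

def sweep_powers(pref_ch_dbm, power_range_db):
    """Return the list of per-channel launch powers (dBm) to simulate."""
    p_start, p_stop, p_step = power_range_db
    p_num = abs(int(round((p_stop - p_start) / p_step))) + 1 if p_step != 0 else 1
    return [pref_ch_dbm + dp for dp in linspace(p_start, p_stop, p_num)]

# e.g. a 0 dBm reference power swept over [-2, 3, 0.5] dB gives 11 points:
print(sweep_powers(0, [-2, 3, 0.5]))
```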

@@ -14,14 +14,14 @@ See: draft-ietf-teas-yang-path-computation-01.txt
from argparse import ArgumentParser
from pathlib import Path
from json import loads
from gnpy.tools.json_io import load_equipment
from gnpy.topology.request import jsontocsv
from gnpy.core.equipment import load_equipment
from gnpy.core.request import jsontocsv
parser = ArgumentParser(description='Converting JSON path results into a CSV')
parser.add_argument('filename', type=Path)
parser.add_argument('output_filename', type=Path)
parser.add_argument('eqpt_filename', nargs='?', type=Path, default=Path(__file__).parent / 'eqpt_config.json')
parser = ArgumentParser(description = 'A function that writes json path results in an excel sheet.')
parser.add_argument('filename', nargs='?', type = Path)
parser.add_argument('output_filename', nargs='?', type = Path)
parser.add_argument('eqpt_filename', nargs='?', type = Path, default=Path(__file__).parent / 'eqpt_config.json')
if __name__ == '__main__':
args = parser.parse_args()
@@ -32,4 +32,5 @@ if __name__ == '__main__':
json_data = loads(f.read())
equipment = load_equipment(args.eqpt_filename)
print(f'Writing in {args.output_filename}')
jsontocsv(json_data, equipment, file)
jsontocsv(json_data,equipment,file)

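The script in this diff only re-exports a saved path-computation JSON as CSV. A minimal sketch of doing the same conversion programmatically (file names are placeholders):

```python
# Programmatic equivalent of the small CLI above; file names are placeholders.
from json import loads
from gnpy.core.equipment import load_equipment
from gnpy.core.request import jsontocsv

equipment = load_equipment('examples/eqpt_config.json')
with open('path_results.json', encoding='utf-8') as f:
    json_data = loads(f.read())
with open('path_results.csv', 'w', encoding='utf-8') as out:
    jsontocsv(json_data, equipment, out)
```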

@@ -1,8 +0,0 @@
'''
GNPy is an open-source, community-developed library for building route planning and optimization tools in real-world mesh optical networks. It is based on the Gaussian Noise Model.
Signal propagation is implemented in :py:mod:`.core`.
Path finding and spectrum assignment is in :py:mod:`.topology`.
Various tools and auxiliary code, including the JSON I/O handling, is in
:py:mod:`.tools`.
'''


@@ -1,9 +1,30 @@
'''
Simulation of signal propagation in the DWDM network
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
########################################################################
# _____ ___ ____ ____ ____ _____ #
# |_ _|_ _| _ \ | _ \/ ___|| ____| #
# | | | || |_) | | |_) \___ \| _| #
# | | | || __/ | __/ ___) | |___ #
# |_| |___|_| |_| |____/|_____| #
# #
# == Physical Simulation Environment == #
# #
########################################################################
Optical signals, as defined via :class:`.info.SpectralInformation`, enter
:py:mod:`.elements` which compute how these signals are affected as they travel
through the :py:mod:`.network`.
The simulation is controlled via :py:mod:`.parameters` and implemented mainly
via :py:mod:`.science_utils`.
'''
gnpy route planning and optimization library
============================================
gnpy is a route planning and optimization library, written in Python, for
operators of large-scale mesh optical networks.
:copyright: © 2018, Telecom Infra Project
:license: BSD 3-Clause, see LICENSE for more details.
'''
from . import elements
from .execute import *
from .network import *
from .utils import *


@@ -9,7 +9,5 @@ A random subset of ANSI terminal escape codes for colored messages
'''
red = '\x1b[1;31;40m'
blue = '\x1b[1;34;40m'
cyan = '\x1b[1;36;40m'
yellow = '\x1b[1;33;40m'
reset = '\x1b[0m'

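These constants are plain ANSI SGR escape sequences; throughout the examples in this diff they simply wrap a message, for instance:

```python
import gnpy.core.ansi_escapes as ansi_escapes

print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} details go here')
```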
gnpy/core/convert.py (new executable file, 631 lines)

@@ -0,0 +1,631 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
gnpy.core.convert
=================
This module contains utilities for converting between XLS and JSON.
The input XLS file must contain sheets named "Nodes" and "Links".
It may optionally contain a sheet named "Eqpt".
In the "Nodes" sheet, only the "City" column is mandatory. The column "Type"
can be determined automatically given the topology (e.g., if degree 2, ILA;
otherwise, ROADM.) Incorrectly specified types (e.g., ILA for node of
degree ≠ 2) will be automatically corrected.
In the "Links" sheet, only the first three columns ("Node A", "Node Z" and
"east Distance (km)") are mandatory. Missing "west" information is copied from
the "east" information so that it is possible to input undirected data.
"""
from sys import exit
try:
from xlrd import open_workbook
except ModuleNotFoundError:
exit('Required: `pip install xlrd`')
from argparse import ArgumentParser
from collections import namedtuple, Counter, defaultdict
from itertools import chain
from json import dumps
from pathlib import Path
from difflib import get_close_matches
from gnpy.core.utils import silent_remove
from gnpy.core.exceptions import NetworkTopologyError
import time
all_rows = lambda sh, start=0: (sh.row(x) for x in range(start, sh.nrows))
class Node(object):
def __init__(self, **kwargs):
super(Node, self).__init__()
self.update_attr(kwargs)
def update_attr(self, kwargs):
clean_kwargs = {k:v for k,v in kwargs.items() if v !=''}
for k,v in self.default_values.items():
v = clean_kwargs.get(k,v)
setattr(self, k, v)
default_values = \
{
'city': '',
'state': '',
'country': '',
'region': '',
'latitude': 0,
'longitude': 0,
'node_type': 'ILA',
'booster_restriction' : '',
'preamp_restriction' : ''
}
class Link(object):
"""attribtes from west parse_ept_headers dict
+node_a, node_z, west_fiber_con_in, east_fiber_con_in
"""
def __init__(self, **kwargs):
super(Link, self).__init__()
self.update_attr(kwargs)
self.distance_units = 'km'
def update_attr(self, kwargs):
clean_kwargs = {k:v for k,v in kwargs.items() if v !=''}
for k,v in self.default_values.items():
v = clean_kwargs.get(k,v)
setattr(self, k, v)
k = 'west' + k.split('east')[-1]
v = clean_kwargs.get(k,v)
setattr(self, k, v)
def __eq__(self, link):
return (self.from_city == link.from_city and self.to_city == link.to_city) \
or (self.from_city == link.to_city and self.to_city == link.from_city)
default_values = \
{
'from_city': '',
'to_city': '',
'east_distance': 80,
'east_fiber': 'SSMF',
'east_lineic': 0.2,
'east_con_in': None,
'east_con_out': None,
'east_pmd': 0.1,
'east_cable': ''
}
class Eqpt(object):
def __init__(self, **kwargs):
super(Eqpt, self).__init__()
self.update_attr(kwargs)
def update_attr(self, kwargs):
clean_kwargs = {k:v for k,v in kwargs.items() if v !=''}
for k,v in self.default_values.items():
v_east = clean_kwargs.get(k,v)
setattr(self, k, v_east)
k = 'west' + k.split('east')[-1]
v_west = clean_kwargs.get(k,v)
setattr(self, k, v_west)
default_values = \
{
'from_city': '',
'to_city': '',
'east_amp_type': '',
'east_att_in': 0,
'east_amp_gain': None,
'east_amp_dp': None,
'east_tilt': 0,
'east_att_out': None
}
def read_header(my_sheet, line, slice_):
""" return the list of headers !:= ''
header_i = [(header, header_column_index), ...]
in a {line, slice1_x, slice_y} range
"""
Param_header = namedtuple('Param_header', 'header colindex')
try:
header = [x.value.strip() for x in my_sheet.row_slice(line, slice_[0], slice_[1])]
header_i = [Param_header(header,i+slice_[0]) for i, header in enumerate(header) if header != '']
except Exception:
header_i = []
if header_i != [] and header_i[-1].colindex != slice_[1]:
header_i.append(Param_header('',slice_[1]))
return header_i
def read_slice(my_sheet, line, slice_, header):
"""return the slice range of a given header
in a defined range {line, slice_x, slice_y}"""
header_i = read_header(my_sheet, line, slice_)
slice_range = (-1,-1)
if header_i != []:
try:
slice_range = next((h.colindex,header_i[i+1].colindex) \
for i,h in enumerate(header_i) if header in h.header)
except Exception:
pass
return slice_range
def parse_headers(my_sheet, input_headers_dict, headers, start_line, slice_in):
"""return a dict of header_slice
key = column index
value = header name"""
for h0 in input_headers_dict:
slice_out = read_slice(my_sheet, start_line, slice_in, h0)
iteration = 1
while slice_out == (-1,-1) and iteration < 10:
#try next lines
#print(h0, iteration)
slice_out = read_slice(my_sheet, start_line+iteration, slice_in, h0)
iteration += 1
if slice_out == (-1, -1):
if h0 in ('east', 'Node A', 'Node Z', 'City') :
print(f'\x1b[1;31;40m'+f'CRITICAL: missing _{h0}_ header: EXECUTION ENDS'+ '\x1b[0m')
exit()
else:
print(f'missing header {h0}')
elif not isinstance(input_headers_dict[h0], dict):
headers[slice_out[0]] = input_headers_dict[h0]
else:
headers = parse_headers(my_sheet, input_headers_dict[h0], headers, start_line+1, slice_out)
if headers == {}:
print(f'\x1b[1;31;40m'+f'CRITICAL ERROR: could not find any header to read _ ABORT'+ '\x1b[0m')
exit()
return headers
def parse_row(row, headers):
#print([label for label in ept.values()])
#print([i for i in ept.keys()])
#print(row[i for i in ept.keys()])
return {f: r.value for f, r in \
zip([label for label in headers.values()], [row[i] for i in headers])}
#if r.ctype != XL_CELL_EMPTY}
def parse_sheet(my_sheet, input_headers_dict, header_line, start_line, column):
headers = parse_headers(my_sheet, input_headers_dict, {}, header_line, (0,column))
for row in all_rows(my_sheet, start=start_line):
yield parse_row(row[0: column], headers)
def sanity_check(nodes, links, nodes_by_city, links_by_city, eqpts_by_city):
duplicate_links = []
for l1 in links:
for l2 in links:
if l1 is not l2 and l1 == l2 and l2 not in duplicate_links:
print(f'\nWARNING\n \
link {l1.from_city}-{l1.to_city} is duplicate \
\nthe 1st duplicate link will be removed but you should check Links sheet input')
duplicate_links.append(l1)
#if duplicate_links != []:
#time.sleep(3)
for l in duplicate_links:
links.remove(l)
try :
test_nodes = [n for n in nodes_by_city if not n in links_by_city]
test_links = [n for n in links_by_city if not n in nodes_by_city]
test_eqpts = [n for n in eqpts_by_city if not n in nodes_by_city]
assert (test_nodes == [] or test_nodes == [''])\
and (test_links == [] or test_links ==[''])\
and (test_eqpts == [] or test_eqpts ==[''])
except AssertionError:
print(f'CRITICAL error: \nNames in Nodes and Links sheets do not match, check:\
\n{test_nodes} in Nodes sheet\
\n{test_links} in Links sheet\
\n{test_eqpts} in Eqpt sheet')
exit(1)
for city,link in links_by_city.items():
if nodes_by_city[city].node_type.lower()=='ila' and len(link) != 2:
#wrong input: ILA sites can only be Degree 2
# => correct to make it a ROADM and remove entry in links_by_city
#TODO : put in log rather than print
print(f'invalid node type ({nodes_by_city[city].node_type})\
specified in {city}, replaced by ROADM')
nodes_by_city[city].node_type = 'ROADM'
for n in nodes:
if n.city==city:
n.node_type='ROADM'
return nodes, links
def convert_file(input_filename, names_matching=False, filter_region=[]):
nodes, links, eqpts = parse_excel(input_filename)
if filter_region:
nodes = [n for n in nodes if n.region.lower() in filter_region]
cities = {n.city for n in nodes}
links = [lnk for lnk in links if lnk.from_city in cities and
lnk.to_city in cities]
cities = {lnk.from_city for lnk in links} | {lnk.to_city for lnk in links}
nodes = [n for n in nodes if n.city in cities]
global nodes_by_city
nodes_by_city = {n.city: n for n in nodes}
#create matching dictionary for node name mismatch analysis
cities = {''.join(c.strip() for c in n.city.split('C+L')).lower(): n.city for n in nodes}
cities_to_match = [k for k in cities]
city_match_dic = defaultdict(list)
for city in cities:
if city in cities_to_match:
cities_to_match.remove(city)
matches = get_close_matches(city, cities_to_match, 4, 0.85)
for m in matches:
city_match_dic[cities[city]].append(cities[m])
#check lower case/upper case
for city in nodes_by_city:
for match_city in nodes_by_city:
if match_city.lower() == city.lower() and match_city != city:
city_match_dic[city].append(match_city)
if names_matching:
print('\ncity match dictionary:',city_match_dic)
with open('name_match_dictionary.json', 'w', encoding='utf-8') as city_match_dic_file:
city_match_dic_file.write(dumps(city_match_dic, indent=2, ensure_ascii=False))
global links_by_city
links_by_city = defaultdict(list)
for link in links:
links_by_city[link.from_city].append(link)
links_by_city[link.to_city].append(link)
global eqpts_by_city
eqpts_by_city = defaultdict(list)
for eqpt in eqpts:
eqpts_by_city[eqpt.from_city].append(eqpt)
nodes, links = sanity_check(nodes, links, nodes_by_city, links_by_city, eqpts_by_city)
data = {
'elements':
[{'uid': f'trx {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Transceiver'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm'] +
[{'uid': f'roadm {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Roadm'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm' \
and x.booster_restriction == '' and x.preamp_restriction == ''] +
[{'uid': f'roadm {x.city}',
'params' : {
'restrictions': {
'preamp_variety_list': silent_remove(x.preamp_restriction.split(' | '),''),
'booster_variety_list': silent_remove(x.booster_restriction.split(' | '),'')
}
},
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Roadm'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm' and \
(x.booster_restriction != '' or x.preamp_restriction != '')] +
[{'uid': f'west fused spans in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Fused'}
for x in nodes_by_city.values() if x.node_type.lower() == 'fused'] +
[{'uid': f'east fused spans in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Fused'}
for x in nodes_by_city.values() if x.node_type.lower() == 'fused'] +
[{'uid': f'fiber ({x.from_city} \u2192 {x.to_city})-{x.east_cable}',
'metadata': {'location': midpoint(nodes_by_city[x.from_city],
nodes_by_city[x.to_city])},
'type': 'Fiber',
'type_variety': x.east_fiber,
'params': {'length': round(x.east_distance, 3),
'length_units': x.distance_units,
'loss_coef': x.east_lineic,
'con_in':x.east_con_in,
'con_out':x.east_con_out}
}
for x in links] +
[{'uid': f'fiber ({x.to_city} \u2192 {x.from_city})-{x.west_cable}',
'metadata': {'location': midpoint(nodes_by_city[x.from_city],
nodes_by_city[x.to_city])},
'type': 'Fiber',
'type_variety': x.west_fiber,
'params': {'length': round(x.west_distance, 3),
'length_units': x.distance_units,
'loss_coef': x.west_lineic,
'con_in':x.west_con_in,
'con_out':x.west_con_out}
} # missing ILA construction
for x in links] +
[{'uid': f'east edfa in {e.from_city} to {e.to_city}',
'metadata': {'location': {'city': nodes_by_city[e.from_city].city,
'region': nodes_by_city[e.from_city].region,
'latitude': nodes_by_city[e.from_city].latitude,
'longitude': nodes_by_city[e.from_city].longitude}},
'type': 'Edfa',
'type_variety': e.east_amp_type,
'operational': {'gain_target': e.east_amp_gain,
'delta_p': e.east_amp_dp,
'tilt_target': e.east_tilt,
'out_voa' : e.east_att_out}
}
for e in eqpts if (e.east_amp_type.lower() != '' and \
e.east_amp_type.lower() != 'fused')] +
[{'uid': f'west edfa in {e.from_city} to {e.to_city}',
'metadata': {'location': {'city': nodes_by_city[e.from_city].city,
'region': nodes_by_city[e.from_city].region,
'latitude': nodes_by_city[e.from_city].latitude,
'longitude': nodes_by_city[e.from_city].longitude}},
'type': 'Edfa',
'type_variety': e.west_amp_type,
'operational': {'gain_target': e.west_amp_gain,
'delta_p': e.west_amp_dp,
'tilt_target': e.west_tilt,
'out_voa' : e.west_att_out}
}
for e in eqpts if (e.west_amp_type.lower() != '' and \
e.west_amp_type.lower() != 'fused')] +
# fused edfa variety is a hack to indicate that there should not be
# booster amplifier out the roadm.
# If user specifies ILA in Nodes sheet and fused in Eqpt sheet, then assumes that
# this is a fused nodes.
[{'uid': f'east edfa in {e.from_city} to {e.to_city}',
'metadata': {'location': {'city': nodes_by_city[e.from_city].city,
'region': nodes_by_city[e.from_city].region,
'latitude': nodes_by_city[e.from_city].latitude,
'longitude': nodes_by_city[e.from_city].longitude}},
'type': 'Fused',
'params': {'loss': 0}
}
for e in eqpts if e.east_amp_type.lower() == 'fused'] +
[{'uid': f'west edfa in {e.from_city} to {e.to_city}',
'metadata': {'location': {'city': nodes_by_city[e.from_city].city,
'region': nodes_by_city[e.from_city].region,
'latitude': nodes_by_city[e.from_city].latitude,
'longitude': nodes_by_city[e.from_city].longitude}},
'type': 'Fused',
'params': {'loss': 0}
}
for e in eqpts if e.west_amp_type.lower() == 'fused'],
'connections':
list(chain.from_iterable([eqpt_connection_by_city(n.city)
for n in nodes]))
+
list(chain.from_iterable(zip(
[{'from_node': f'trx {x.city}',
'to_node': f'roadm {x.city}'}
for x in nodes_by_city.values() if x.node_type.lower()=='roadm'],
[{'from_node': f'roadm {x.city}',
'to_node': f'trx {x.city}'}
for x in nodes_by_city.values() if x.node_type.lower()=='roadm'])))
}
suffix_filename = str(input_filename.suffixes[0])
full_input_filename = str(input_filename)
split_filename = [full_input_filename[0:len(full_input_filename)-len(suffix_filename)] , suffix_filename[1:]]
output_json_file_name = split_filename[0]+'.json'
with open(output_json_file_name, 'w', encoding='utf-8') as edfa_json_file:
edfa_json_file.write(dumps(data, indent=2, ensure_ascii=False))
return output_json_file_name
def parse_excel(input_filename):
link_headers = \
{ 'Node A': 'from_city',
'Node Z': 'to_city',
'east':{
'Distance (km)': 'east_distance',
'Fiber type': 'east_fiber',
'lineic att': 'east_lineic',
'Con_in': 'east_con_in',
'Con_out': 'east_con_out',
'PMD': 'east_pmd',
'Cable id': 'east_cable'
},
'west':{
'Distance (km)': 'west_distance',
'Fiber type': 'west_fiber',
'lineic att': 'west_lineic',
'Con_in': 'west_con_in',
'Con_out': 'west_con_out',
'PMD': 'west_pmd',
'Cable id': 'west_cable'
}
}
node_headers = \
{ 'City': 'city',
'State': 'state',
'Country': 'country',
'Region': 'region',
'Latitude': 'latitude',
'Longitude': 'longitude',
'Type': 'node_type',
'Booster_restriction': 'booster_restriction',
'Preamp_restriction': 'preamp_restriction'
}
eqpt_headers = \
{ 'Node A': 'from_city',
'Node Z': 'to_city',
'east':{
'amp type': 'east_amp_type',
'att_in': 'east_att_in',
'amp gain': 'east_amp_gain',
'delta p': 'east_amp_dp',
'tilt': 'east_tilt',
'att_out': 'east_att_out'
},
'west':{
'amp type': 'west_amp_type',
'att_in': 'west_att_in',
'amp gain': 'west_amp_gain',
'delta p': 'west_amp_dp',
'tilt': 'west_tilt',
'att_out': 'west_att_out'
}
}
with open_workbook(input_filename) as wb:
nodes_sheet = wb.sheet_by_name('Nodes')
links_sheet = wb.sheet_by_name('Links')
try:
eqpt_sheet = wb.sheet_by_name('Eqpt')
except Exception:
#eqpt_sheet is optional
eqpt_sheet = None
nodes = []
for node in parse_sheet(nodes_sheet, node_headers, NODES_LINE, NODES_LINE+1, NODES_COLUMN):
nodes.append(Node(**node))
expected_node_types = {'ROADM', 'ILA', 'FUSED'}
for n in nodes:
if n.node_type not in expected_node_types:
n.node_type = 'ILA'
links = []
for link in parse_sheet(links_sheet, link_headers, LINKS_LINE, LINKS_LINE+2, LINKS_COLUMN):
links.append(Link(**link))
#print('\n', [l.__dict__ for l in links])
eqpts = []
if eqpt_sheet != None:
for eqpt in parse_sheet(eqpt_sheet, eqpt_headers, EQPTS_LINE, EQPTS_LINE+2, EQPTS_COLUMN):
eqpts.append(Eqpt(**eqpt))
# sanity check
all_cities = Counter(n.city for n in nodes)
if len(all_cities) != len(nodes):
raise ValueError(f'Duplicate city: {all_cities}')
bad_links = []
for lnk in links:
if lnk.from_city not in all_cities or lnk.to_city not in all_cities:
bad_links.append([lnk.from_city, lnk.to_city])
if bad_links:
raise NetworkTopologyError(f'Bad link(s): {bad_links}.')
return nodes, links, eqpts
def eqpt_connection_by_city(city_name):
other_cities = fiber_dest_from_source(city_name)
subdata = []
if nodes_by_city[city_name].node_type.lower() in {'ila', 'fused'}:
# Then len(other_cities) == 2
direction = ['west', 'east']
for i in range(2):
from_ = fiber_link(other_cities[i], city_name)
in_ = eqpt_in_city_to_city(city_name, other_cities[0],direction[i])
to_ = fiber_link(city_name, other_cities[1-i])
subdata += connect_eqpt(from_, in_, to_)
elif nodes_by_city[city_name].node_type.lower() == 'roadm':
for other_city in other_cities:
from_ = f'roadm {city_name}'
in_ = eqpt_in_city_to_city(city_name, other_city)
to_ = fiber_link(city_name, other_city)
subdata += connect_eqpt(from_, in_, to_)
from_ = fiber_link(other_city, city_name)
in_ = eqpt_in_city_to_city(city_name, other_city, "west")
to_ = f'roadm {city_name}'
subdata += connect_eqpt(from_, in_, to_)
return subdata
def connect_eqpt(from_, in_, to_):
connections = []
if in_ !='':
connections = [{'from_node': from_, 'to_node': in_},
{'from_node': in_, 'to_node': to_}]
else:
connections = [{'from_node': from_, 'to_node': to_}]
return connections
def eqpt_in_city_to_city(in_city, to_city, direction='east'):
rev_direction = 'west' if direction == 'east' else 'east'
amp_direction = f'{direction}_amp_type'
amp_rev_direction = f'{rev_direction}_amp_type'
return_eqpt = ''
if in_city in eqpts_by_city:
for e in eqpts_by_city[in_city]:
if nodes_by_city[in_city].node_type.lower() == 'roadm':
if e.to_city == to_city and getattr(e, amp_direction) != '':
return_eqpt = f'{direction} edfa in {e.from_city} to {e.to_city}'
elif nodes_by_city[in_city].node_type.lower() == 'ila':
if e.to_city != to_city:
direction = rev_direction
amp_direction = amp_rev_direction
if getattr(e, amp_direction) != '':
return_eqpt = f'{direction} edfa in {e.from_city} to {e.to_city}'
if nodes_by_city[in_city].node_type.lower() == 'fused':
return_eqpt = f'{direction} fused spans in {in_city}'
return return_eqpt
def fiber_dest_from_source(city_name):
destinations = []
links_from_city = links_by_city[city_name]
for l in links_from_city:
if l.from_city == city_name:
destinations.append(l.to_city)
else:
destinations.append(l.from_city)
return destinations
def fiber_link(from_city, to_city):
source_dest = (from_city, to_city)
link = links_by_city[from_city]
l = next(l for l in link if l.from_city in source_dest and l.to_city in source_dest)
if l.from_city == from_city:
fiber = f'fiber ({l.from_city} \u2192 {l.to_city})-{l.east_cable}'
else:
fiber = f'fiber ({l.to_city} \u2192 {l.from_city})-{l.west_cable}'
return fiber
def midpoint(city_a, city_b):
lats = city_a.latitude, city_b.latitude
longs = city_a.longitude, city_b.longitude
try:
result = {
'latitude': sum(lats) / 2,
'longitude': sum(longs) / 2
}
except :
result = {
'latitude': 0,
'longitude': 0
}
return result
#output_json_file_name = 'coronet_conus_example.json'
#TODO get column size automatically from tupple size
NODES_COLUMN = 10
NODES_LINE = 4
LINKS_COLUMN = 16
LINKS_LINE = 3
EQPTS_LINE = 3
EQPTS_COLUMN = 14
parser = ArgumentParser()
parser.add_argument('workbook', nargs='?', type=Path , default='meshTopologyExampleV2.xls')
parser.add_argument('-f', '--filter-region', action='append', default=[])
if __name__ == '__main__':
args = parser.parse_args()
convert_file(args.workbook, filter_region=args.filter_region)

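Besides the CLI at the bottom of the module, convert_file() can be driven directly from Python; a minimal sketch using the default example workbook referenced in the argument parser above:

```python
# Sketch: run the XLS -> JSON conversion from Python instead of the CLI above.
from pathlib import Path
from gnpy.core.convert import convert_file

# The function writes <workbook name>.json next to the input and returns its path;
# names_matching=True also dumps name_match_dictionary.json for close-match analysis.
json_name = convert_file(Path('meshTopologyExampleV2.xls'), names_matching=True)
print(f'Topology written to {json_name}')
```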
(File diff suppressed because it is too large.)


@@ -8,9 +8,261 @@ gnpy.core.equipment
This module contains functionality for specifying equipment.
'''
from gnpy.core.utils import automatic_nch, db2lin
from numpy import clip, polyval
from operator import itemgetter
from math import isclose
from pathlib import Path
from json import load
from gnpy.core.utils import lin2db, db2lin, load_json
from collections import namedtuple
from gnpy.core.elements import Edfa
from gnpy.core.exceptions import EquipmentConfigError
import time
Model_vg = namedtuple('Model_vg', 'nf1 nf2 delta_p')
Model_fg = namedtuple('Model_fg', 'nf0')
Model_openroadm = namedtuple('Model_openroadm', 'nf_coef')
Model_hybrid = namedtuple('Model_hybrid', 'nf_ram gain_ram edfa_variety')
Model_dual_stage = namedtuple('Model_dual_stage', 'preamp_variety booster_variety')
class common:
def update_attr(self, default_values, kwargs, name):
clean_kwargs = {k:v for k, v in kwargs.items() if v != ''}
for k, v in default_values.items():
setattr(self, k, clean_kwargs.get(k, v))
if k not in clean_kwargs and name != 'Amp':
print(f'\x1b[1;31;40m'+
f'\n WARNING missing {k} attribute in eqpt_config.json[{name}]'+
f'\n default value is {k} = {v}'+
f'\x1b[0m')
time.sleep(1)
class SI(common):
default_values =\
{
"f_min": 191.35e12,
"f_max": 196.1e12,
"baud_rate": 32e9,
"spacing": 50e9,
"power_dbm": 0,
"power_range_db": [0, 0, 0.5],
"roll_off": 0.15,
"tx_osnr": 45,
"sys_margins": 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'SI')
class Span(common):
default_values = \
{
'power_mode': True,
'delta_power_range_db': None,
'max_fiber_lineic_loss_for_raman': 0.25,
'target_extended_gain': 2.5,
'max_length': 150,
'length_units': 'km',
'max_loss': None,
'padding': 10,
'EOL': 0,
'con_in': 0,
'con_out': 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Span')
class Roadm(common):
default_values = \
{
'target_pch_out_db': -17,
'add_drop_osnr': 100,
'restrictions': {
'preamp_variety_list':[],
'booster_variety_list':[]
}
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Roadm')
class Transceiver(common):
default_values = \
{
'type_variety': None,
'frequency': None,
'mode': {}
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Transceiver')
class Fiber(common):
default_values = \
{
'type_variety': '',
'dispersion': None,
'gamma': 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Fiber')
class RamanFiber(common):
default_values = \
{
'type_variety': '',
'dispersion': None,
'gamma': 0,
'raman_efficiency': None
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'RamanFiber')
for param in ('cr', 'frequency_offset'):
if param not in self.raman_efficiency:
raise EquipmentConfigError(f'RamanFiber.raman_efficiency: missing "{param}" parameter')
if self.raman_efficiency['frequency_offset'] != sorted(self.raman_efficiency['frequency_offset']):
raise EquipmentConfigError(f'RamanFiber.raman_efficiency.frequency_offset is not sorted')
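# Hedged sketch of the expected raman_efficiency shape (values below are purely
# illustrative, not calibrated data): both keys checked above must be present and
# frequency_offset must already be sorted in increasing order.
# "raman_efficiency": {"cr": [0.0, 9.4e-5, 2.9e-4],
#                      "frequency_offset": [0.0, 5.0e12, 10.0e12]}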
class Amp(common):
default_values = \
{
'f_min': 191.35e12,
'f_max': 196.1e12,
'type_variety': '',
'type_def': '',
'gain_flatmax': None,
'gain_min': None,
'p_max': None,
'nf_model': None,
'dual_stage_model': None,
'nf_fit_coeff': None,
'nf_ripple': None,
'dgt': None,
'gain_ripple': None,
'out_voa_auto': False,
'allowed_for_design': False,
'raman': False
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Amp')
@classmethod
def from_json(cls, filename, **kwargs):
config = Path(filename).parent / 'default_edfa_config.json'
type_variety = kwargs['type_variety']
type_def = kwargs.get('type_def', 'variable_gain') # default compatibility with older json eqpt files
nf_def = None
dual_stage_def = None
if type_def == 'fixed_gain':
try:
nf0 = kwargs.pop('nf0')
except KeyError: #nf0 is expected for a fixed gain amp
raise EquipmentConfigError(f'missing nf0 value input for amplifier: {type_variety} in equipment config')
for k in ('nf_min', 'nf_max'):
try:
del kwargs[k]
except KeyError:
pass
nf_def = Model_fg(nf0)
elif type_def == 'advanced_model':
config = Path(filename).parent / kwargs.pop('advanced_config_from_json')
elif type_def == 'variable_gain':
gain_min, gain_max = kwargs['gain_min'], kwargs['gain_flatmax']
try: #nf_min and nf_max are expected for a variable gain amp
nf_min = kwargs.pop('nf_min')
nf_max = kwargs.pop('nf_max')
except KeyError:
raise EquipmentConfigError(f'missing nf_min or nf_max value input for amplifier: {type_variety} in equipment config')
try: #remove all remaining nf inputs
del kwargs['nf0']
except KeyError: pass #nf0 is not needed for variable gain amp
nf1, nf2, delta_p = nf_model(type_variety, gain_min, gain_max, nf_min, nf_max)
nf_def = Model_vg(nf1, nf2, delta_p)
elif type_def == 'openroadm':
try:
nf_coef = kwargs.pop('nf_coef')
except KeyError: #nf_coef is expected for openroadm amp
raise EquipmentConfigError(f'missing nf_coef input for amplifier: {type_variety} in equipment config')
nf_def = Model_openroadm(nf_coef)
elif type_def == 'dual_stage':
try: #nf_ram and gain_ram are expected for a hybrid amp
preamp_variety = kwargs.pop('preamp_variety')
booster_variety = kwargs.pop('booster_variety')
except KeyError:
raise EquipmentConfigError(f'missing preamp/booster variety input for amplifier: {type_variety} in equipment config')
dual_stage_def = Model_dual_stage(preamp_variety, booster_variety)
with open(config, encoding='utf-8') as f:
json_data = load(f)
return cls(**{**kwargs, **json_data,
'nf_model': nf_def, 'dual_stage_model': dual_stage_def})
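# Hedged sketch of a 'variable_gain' equipment entry as consumed by from_json()
# (field names follow the keyword handling above; every number is illustrative only):
# {"type_variety": "example_vg_amp", "type_def": "variable_gain",
#  "gain_flatmax": 26, "gain_min": 15, "p_max": 21,
#  "nf_min": 6, "nf_max": 10,
#  "out_voa_auto": false, "allowed_for_design": true}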
def nf_model(type_variety, gain_min, gain_max, nf_min, nf_max):
if nf_min < -10:
raise EquipmentConfigError(f'Invalid nf_min value {nf_min!r} for amplifier {type_variety}')
if nf_max < -10:
raise EquipmentConfigError(f'Invalid nf_max value {nf_max!r} for amplifier {type_variety}')
# NF estimation model based on nf_min and nf_max
# delta_p: max power dB difference between first and second stage coils
# g1a: first stage gain minus internal VOA attenuation (dB)
# nf1, nf2: first and second stage coils
# calculated by solving nf_{min,max} = nf1 + nf2 / g1a{min,max}
delta_p = 5
g1a_min = gain_min - (gain_max - gain_min) - delta_p
g1a_max = gain_max - delta_p
nf2 = lin2db((db2lin(nf_min) - db2lin(nf_max)) /
(1/db2lin(g1a_max) - 1/db2lin(g1a_min)))
nf1 = lin2db(db2lin(nf_min) - db2lin(nf2)/db2lin(g1a_max))
if nf1 < 4:
raise EquipmentConfigError(f'First coil value too low {nf1} for amplifier {type_variety}')
# Check 1 dB < delta_p < 6 dB to ensure nf_min and nf_max values make sense.
# There shouldn't be high nf differences between the two coils:
# nf2 should be nf1 + 0.3 < nf2 < nf1 + 2
# If not, recompute and check delta_p
if not nf1 + 0.3 < nf2 < nf1 + 2:
nf2 = clip(nf2, nf1 + 0.3, nf1 + 2)
g1a_max = lin2db(db2lin(nf2) / (db2lin(nf_min) - db2lin(nf1)))
delta_p = gain_max - g1a_max
g1a_min = gain_min - (gain_max-gain_min) - delta_p
if not 1 < delta_p < 11:
raise EquipmentConfigError(f'Computed \N{greek capital letter delta}P invalid \
\n 1st coil vs 2nd coil calculated DeltaP {delta_p:.2f} for \
\n amplifier {type_variety} is not valid: revise inputs \
\n calculated 1st coil NF = {nf1:.2f}, 2nd coil NF = {nf2:.2f}')
# Check calculated values for nf1 and nf2
calc_nf_min = lin2db(db2lin(nf1) + db2lin(nf2)/db2lin(g1a_max))
if not isclose(nf_min, calc_nf_min, abs_tol=0.01):
raise EquipmentConfigError(f'nf_min does not match calc_nf_min, {nf_min} vs {calc_nf_min} for amp {type_variety}')
calc_nf_max = lin2db(db2lin(nf1) + db2lin(nf2)/db2lin(g1a_min))
if not isclose(nf_max, calc_nf_max, abs_tol=0.01):
raise EquipmentConfigError(f'nf_max does not match calc_nf_max, {nf_max} vs {calc_nf_max} for amp {type_variety}')
return nf1, nf2, delta_p
def edfa_nf(gain_target, variety_type, equipment):
amp_params = equipment['Edfa'][variety_type]
amp = Edfa(
uid = f'calc_NF',
params = amp_params.__dict__,
operational = {
'gain_target': gain_target,
'tilt_target': 0
}
)
amp.pin_db = 0
amp.nch = 88
return amp._calc_nf(True)
def trx_mode_params(equipment, trx_type_variety='', trx_mode='', error_message=False):
"""return the trx and SI parameters from eqpt_config for a given type_variety and mode (ie format)"""
@@ -19,39 +271,38 @@ def trx_mode_params(equipment, trx_type_variety='', trx_mode='', error_message=F
try:
trxs = equipment['Transceiver']
# if called from path_requests_run.py, trx_mode is filled with None when not specified by user
# if called from transmission_main.py, trx_mode is ''
if trx_mode is not None:
mode_params = next(mode for trx in trxs
if trx == trx_type_variety
for mode in trxs[trx].mode
if mode['format'] == trx_mode)
trx_params = {**mode_params}
# sanity check: the baud rate must be smaller than the min spacing
if trx_params['baud_rate'] > trx_params['min_spacing']:
raise EquipmentConfigError(f'Inconsistency in equipment library:\n Transponder "{trx_type_variety}" mode "{trx_params["format"]}" ' +
f'has baud rate {trx_params["baud_rate"]*1e-9} GHz greater than min_spacing {trx_params["min_spacing"]*1e-9}.')
else:
mode_params = {"format": "undetermined",
"baud_rate": None,
"OSNR": None,
"penalties": None,
"bit_rate": None,
"roll_off": None,
"tx_osnr": None,
"min_spacing": None,
"cost": None}
"tx_osnr":None,
"min_spacing":None,
"cost":None}
trx_params = {**mode_params}
trx_params['f_min'] = equipment['Transceiver'][trx_type_variety].frequency['min']
trx_params['f_max'] = equipment['Transceiver'][trx_type_variety].frequency['max']
# TODO: this automatic feature may be unwanted if spacing is specified
# trx_params['spacing'] = automatic_spacing(trx_params['baud_rate'])
except StopIteration:
if error_message:
raise EquipmentConfigError(f'Could not find transponder "{trx_type_variety}" with mode "{trx_mode}" in equipment library')
else:
# default transponder characteristics
# mainly used with transmission_main_example.py
@@ -60,7 +311,6 @@ def trx_mode_params(equipment, trx_type_variety='', trx_mode='', error_message=F
trx_params['baud_rate'] = default_si_data.baud_rate
trx_params['spacing'] = default_si_data.spacing
trx_params['OSNR'] = None
trx_params['penalties'] = {}
trx_params['bit_rate'] = None
trx_params['cost'] = None
trx_params['roll_off'] = default_si_data.roll_off
@@ -70,6 +320,82 @@ def trx_mode_params(equipment, trx_type_variety='', trx_mode='', error_message=F
trx_params['nb_channel'] = nch
print(f'There are {nch} channels propagating')
trx_params['power'] = db2lin(default_si_data.power_dbm) * 1e-3
return trx_params
def automatic_spacing(baud_rate):
"""return the min possible channel spacing for a given baud rate"""
# TODO: this should be parametrized in a cfg file
# list of possible tuples [(max_baud_rate, spacing_for_this_baud_rate)]
spacing_list = [(33e9, 37.5e9), (38e9, 50e9), (50e9, 62.5e9), (67e9, 75e9), (92e9, 100e9)]
return min((s[1] for s in spacing_list if s[0] > baud_rate), default=baud_rate*1.2)
def automatic_nch(f_min, f_max, spacing):
return int((f_max - f_min)//spacing)
def automatic_fmax(f_min, spacing, nch):
return f_min + spacing * nch
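# Hedged examples of the helpers above (inputs chosen for illustration only):
# automatic_spacing(32e9) returns 37.5e9, the smallest grid slot wider than 32 GBaud;
# automatic_nch(191.35e12, 196.1e12, 50e9) returns 95 channels for the default C-band.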
def load_equipment(filename):
json_data = load_json(filename)
return equipment_from_json(json_data, filename)
def update_trx_osnr(equipment):
"""add sys_margins to all Transceivers OSNR values"""
for trx in equipment['Transceiver'].values():
for m in trx.mode:
m['OSNR'] = m['OSNR'] + equipment['SI']['default'].sys_margins
return equipment
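# Hedged example: with sys_margins = 2 dB configured in the SI block, a transceiver
# mode listing an OSNR threshold of 12 dB is tightened to 14 dB by update_trx_osnr().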
def update_dual_stage(equipment):
edfa_dict = equipment['Edfa']
for edfa in edfa_dict.values():
if edfa.type_def == 'dual_stage':
edfa_preamp = edfa_dict[edfa.dual_stage_model.preamp_variety]
edfa_booster = edfa_dict[edfa.dual_stage_model.booster_variety]
for key, value in edfa_preamp.__dict__.items():
attr_k = 'preamp_' + key
setattr(edfa, attr_k, value)
for key, value in edfa_booster.__dict__.items():
attr_k = 'booster_' + key
setattr(edfa, attr_k, value)
edfa.p_max = edfa_booster.p_max
edfa.gain_flatmax = edfa_booster.gain_flatmax + edfa_preamp.gain_flatmax
if edfa.gain_min < edfa_preamp.gain_min:
raise EquipmentConfigError(f'Dual stage {edfa.type_variety} min gain is lower than its preamp min gain')
return equipment
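# Hedged example (amplifier names are hypothetical): a 'dual_stage' EDFA built from
# 'preamp_x' and 'booster_y' inherits booster_y.p_max and ends up with gain_flatmax
# equal to the sum of both stages' flat gains, as computed above.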
def roadm_restrictions_sanity_check(equipment):
""" verifies that booster and preamp restrictions specified in roadm equipment are listed
in the edfa.
"""
restrictions = equipment['Roadm']['default'].restrictions['booster_variety_list'] + \
equipment['Roadm']['default'].restrictions['preamp_variety_list']
for amp_name in restrictions:
if amp_name not in equipment['Edfa']:
raise EquipmentConfigError(f'ROADM restriction {amp_name} does not refer to a defined EDFA name')
def equipment_from_json(json_data, filename):
"""build global dictionnary eqpt_library that stores all eqpt characteristics:
edfa type type_variety, fiber type_variety
from the eqpt_config.json (filename parameter)
also read advanced_config_from_json file parameters for edfa if they are available:
typically nf_ripple, dfg gain ripple, dgt and nf polynomial nf_fit_coeff
if advanced_config_from_json file parameter is not present: use nf_model:
requires nf_min and nf_max values boundaries of the edfa gain range
"""
equipment = {}
for key, entries in json_data.items():
equipment[key] = {}
typ = globals()[key]
for entry in entries:
subkey = entry.get('type_variety', 'default')
if key == 'Edfa':
equipment[key][subkey] = Amp.from_json(filename, **entry)
else:
equipment[key][subkey] = typ(**entry)
equipment = update_trx_osnr(equipment)
equipment = update_dual_stage(equipment)
roadm_restrictions_sanity_check(equipment)
return equipment
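# Hedged usage sketch (file and variety names are assumptions, not part of this diff):
# equipment = load_equipment('eqpt_config.json')
# si = equipment['SI']['default']
# edfa = equipment['Edfa']['example_low_noise']   # hypothetical type_variety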

View File

@@ -1,37 +1,29 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
'''
gnpy.core.exceptions
====================
Exceptions thrown by other gnpy modules
"""
'''
class ConfigurationError(Exception):
"""User-provided configuration contains an error"""
'''User-provided configuration contains an error'''
class EquipmentConfigError(ConfigurationError):
"""Incomplete or wrong configuration within the equipment library"""
'''Incomplete or wrong configuration within the equipment library'''
class NetworkTopologyError(ConfigurationError):
"""Topology of user-provided network is wrong"""
'''Topology of user-provided network is wrong'''
class ServiceError(Exception):
"""Service of user-provided request is wrong"""
'''Service of user-provided request is wrong'''
class DisjunctionError(ServiceError):
"""Disjunction of user-provided request can not be satisfied"""
'''Disjunction of user-provided request can not be satisfied'''
class SpectrumError(Exception):
"""Spectrum errors of the program"""
'''Spectrum errors of the program'''
class ParametersError(ConfigurationError):
"""Incomplete or wrong configurations within parameters json"""

gnpy/core/execute.py Normal file (10 lines)
View File

@@ -0,0 +1,10 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
gnpy.core.execute
=================
This module contains functions for executing the propagation of
spectral information on a `gnpy` network.
'''

View File

@@ -1,264 +1,72 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
'''
gnpy.core.info
==============
This module contains classes for modelling :class:`SpectralInformation`.
"""
'''
from __future__ import annotations
from collections import namedtuple
from collections.abc import Iterable
from typing import Union
from numpy import argsort, mean, array, append, ones, ceil, any, zeros, outer, full, ndarray, asarray
from gnpy.core.utils import automatic_nch, lin2db, db2lin
from gnpy.core.exceptions import SpectrumError
DEFAULT_SLOT_WIDTH_STEP = 12.5e9 # Hz
"""Channels with unspecified slot width will have their slot width evaluated as the baud rate rounded up to the minimum
multiple of the DEFAULT_SLOT_WIDTH_STEP (the baud rate is extended including the roll off in this evaluation)"""
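# Hedged illustration of the rule above (numbers chosen for the example only):
# a 32 GBaud carrier with roll_off = 0.15 occupies 32e9 * 1.15 = 36.8 GHz, which is
# rounded up to ceil(36.8e9 / 12.5e9) * 12.5e9 = 37.5e9 Hz of slot width.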
from numpy import array
from gnpy.core.utils import lin2db, db2lin
from json import loads
from gnpy.core.utils import load_json
from gnpy.core.equipment import automatic_nch, automatic_spacing
class Power(namedtuple('Power', 'signal nli ase')):
"""carriers power in W"""
class Channel(namedtuple('Channel',
'channel_number frequency baud_rate slot_width roll_off power chromatic_dispersion pmd pdl')):
""" Class containing the parameters of a WDM signal.
:param channel_number: channel number in the WDM grid
:param frequency: central frequency of the signal (Hz)
:param baud_rate: the symbol rate of the signal (Baud)
:param slot_width: the slot width (Hz)
:param roll_off: the roll off of the signal. It is a pure number between 0 and 1
:param power (gnpy.core.info.Power): power of signal, ASE noise and NLI (W)
:param chromatic_dispersion: chromatic dispersion (s/m)
:param pmd: polarization mode dispersion (s)
:param pdl: polarization dependent loss (dB)
"""
class Channel(namedtuple('Channel', 'channel_number frequency baud_rate roll_off power')):
pass
class Pref(namedtuple('Pref', 'p_span0, p_spani, neq_ch ')):
"""noiseless reference power in dBm:
"""noiseless reference power in dBm:
p_span0: inital target carrier power
p_spani: carrier power after element i
neq_ch: equivalent channel count in dB"""
class SpectralInformation(object):
""" Class containing the parameters of the entire WDM comb."""
class SpectralInformation(namedtuple('SpectralInformation', 'pref carriers')):
def __init__(self, frequency: array, baud_rate: array, slot_width: array, signal: array, nli: array, ase: array,
roll_off: array, chromatic_dispersion: array, pmd: array, pdl: array):
indices = argsort(frequency)
self._frequency = frequency[indices]
self._df = outer(ones(frequency.shape), frequency) - outer(frequency, ones(frequency.shape))
self._number_of_channels = len(self._frequency)
self._channel_number = [*range(1, self._number_of_channels + 1)]
self._slot_width = slot_width[indices]
self._baud_rate = baud_rate[indices]
overlap = self._frequency[:-1] + self._slot_width[:-1] / 2 > self._frequency[1:] - self._slot_width[1:] / 2
if any(overlap):
overlap = [pair for pair in zip(overlap * self._channel_number[:-1], overlap * self._channel_number[1:])
if pair != (0, 0)]
raise SpectrumError(f'Spectrum required slot widths larger than the frequency spectral distances '
f'between channels: {overlap}.')
exceed = self._baud_rate > self._slot_width
if any(exceed):
raise SpectrumError(f'Spectrum baud rate, including the roll off, larger than the slot width for channels: '
f'{[ch for ch in exceed * self._channel_number if ch]}.')
self._signal = signal[indices]
self._nli = nli[indices]
self._ase = ase[indices]
self._roll_off = roll_off[indices]
self._chromatic_dispersion = chromatic_dispersion[indices]
self._pmd = pmd[indices]
self._pdl = pdl[indices]
pref = lin2db(mean(signal) * 1e3)
self._pref = Pref(pref, pref, lin2db(self._number_of_channels))
@property
def pref(self):
"""Instance of gnpy.info.Pref"""
return self._pref
@pref.setter
def pref(self, pref: Pref):
self._pref = pref
@property
def frequency(self):
return self._frequency
@property
def df(self):
"""Matrix of relative frequency distances between all channels. Positive elements in the upper right side."""
return self._df
@property
def slot_width(self):
return self._slot_width
@property
def baud_rate(self):
return self._baud_rate
@property
def number_of_channels(self):
return self._number_of_channels
@property
def powers(self):
powers = zip(self.signal, self.nli, self.ase)
return [Power(*p) for p in powers]
@property
def signal(self):
return self._signal
@signal.setter
def signal(self, signal):
self._signal = signal
@property
def nli(self):
return self._nli
@nli.setter
def nli(self, nli):
self._nli = nli
@property
def ase(self):
return self._ase
@ase.setter
def ase(self, ase):
self._ase = ase
@property
def roll_off(self):
return self._roll_off
@property
def chromatic_dispersion(self):
return self._chromatic_dispersion
@chromatic_dispersion.setter
def chromatic_dispersion(self, chromatic_dispersion):
self._chromatic_dispersion = chromatic_dispersion
@property
def pmd(self):
return self._pmd
@pmd.setter
def pmd(self, pmd):
self._pmd = pmd
@property
def pdl(self):
return self._pdl
@pdl.setter
def pdl(self, pdl):
self._pdl = pdl
@property
def channel_number(self):
return self._channel_number
@property
def carriers(self):
entries = zip(self.channel_number, self.frequency, self.baud_rate, self.slot_width,
self.roll_off, self.powers, self.chromatic_dispersion, self.pmd, self.pdl)
return [Channel(*entry) for entry in entries]
def apply_attenuation_lin(self, attenuation_lin):
self.signal *= attenuation_lin
self.nli *= attenuation_lin
self.ase *= attenuation_lin
def apply_attenuation_db(self, attenuation_db):
attenuation_lin = 1 / db2lin(attenuation_db)
self.apply_attenuation_lin(attenuation_lin)
def apply_gain_lin(self, gain_lin):
self.signal *= gain_lin
self.nli *= gain_lin
self.ase *= gain_lin
def apply_gain_db(self, gain_db):
gain_lin = db2lin(gain_db)
self.apply_gain_lin(gain_lin)
def __add__(self, other: SpectralInformation):
try:
return SpectralInformation(frequency=append(self.frequency, other.frequency),
slot_width=append(self.slot_width, other.slot_width),
signal=append(self.signal, other.signal), nli=append(self.nli, other.nli),
ase=append(self.ase, other.ase),
baud_rate=append(self.baud_rate, other.baud_rate),
roll_off=append(self.roll_off, other.roll_off),
chromatic_dispersion=append(self.chromatic_dispersion,
other.chromatic_dispersion),
pmd=append(self.pmd, other.pmd),
pdl=append(self.pdl, other.pdl))
except SpectrumError:
raise SpectrumError('Spectra cannot be summed: channels overlapping.')
def _replace(self, carriers, pref):
self.chromatic_dispersion = array([c.chromatic_dispersion for c in carriers])
self.pmd = array([c.pmd for c in carriers])
self.pdl = array([c.pdl for c in carriers])
self.signal = array([c.power.signal for c in carriers])
self.nli = array([c.power.nli for c in carriers])
self.ase = array([c.power.ase for c in carriers])
self.pref = pref
return self
def create_arbitrary_spectral_information(frequency: Union[ndarray, Iterable, int, float],
signal: Union[int, float, ndarray, Iterable],
baud_rate: Union[int, float, ndarray, Iterable],
slot_width: Union[int, float, ndarray, Iterable] = None,
roll_off: Union[int, float, ndarray, Iterable] = 0.,
chromatic_dispersion: Union[int, float, ndarray, Iterable] = 0.,
pmd: Union[int, float, ndarray, Iterable] = 0.,
pdl: Union[int, float, ndarray, Iterable] = 0.):
"""This is just a wrapper around the SpectralInformation.__init__() that simplifies the creation of
a non-uniform spectral information with NLI and ASE powers set to zero."""
frequency = asarray(frequency)
number_of_channels = frequency.size
try:
signal = full(number_of_channels, signal)
baud_rate = full(number_of_channels, baud_rate)
roll_off = full(number_of_channels, roll_off)
slot_width = full(number_of_channels, slot_width) if slot_width is not None else \
ceil((1 + roll_off) * baud_rate / DEFAULT_SLOT_WIDTH_STEP) * DEFAULT_SLOT_WIDTH_STEP
chromatic_dispersion = full(number_of_channels, chromatic_dispersion)
pmd = full(number_of_channels, pmd)
pdl = full(number_of_channels, pdl)
nli = zeros(number_of_channels)
ase = zeros(number_of_channels)
return SpectralInformation(frequency=frequency, slot_width=slot_width,
signal=signal, nli=nli, ase=ase,
baud_rate=baud_rate, roll_off=roll_off,
chromatic_dispersion=chromatic_dispersion,
pmd=pmd, pdl=pdl)
except ValueError as e:
if 'could not broadcast' in str(e):
raise SpectrumError('Dimension mismatch in input fields.')
else:
raise
def __new__(cls, pref, carriers):
return super().__new__(cls, pref, carriers)
def create_input_spectral_information(f_min, f_max, roll_off, baud_rate, power, spacing):
""" Creates a fixed slot width spectral information with flat power """
# pref in dB : convert power lin into power in dB
pref = lin2db(power * 1e3)
nb_channel = automatic_nch(f_min, f_max, spacing)
frequency = [(f_min + spacing * i) for i in range(1, nb_channel + 1)]
return create_arbitrary_spectral_information(frequency, slot_width=spacing, signal=power, baud_rate=baud_rate,
roll_off=roll_off)
si = SpectralInformation(
pref=Pref(pref, pref, lin2db(nb_channel)),
carriers=[
Channel(f, (f_min+spacing*f),
baud_rate, roll_off, Power(power, 0, 0)) for f in range(1,nb_channel+1)
])
return si
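# Hedged usage sketch of the new helper above (power/spacing values illustrative only):
# si = create_input_spectral_information(f_min=191.35e12, f_max=196.1e12,
#                                         roll_off=0.15, baud_rate=32e9,
#                                         power=1e-3, spacing=50e9)
# builds 95 equally spaced carriers at 1 mW each with a 50 GHz slot width.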
if __name__ == '__main__':
pref = lin2db(power * 1e3)
si = SpectralInformation(
Pref(pref, pref),
Channel(1, 193.95e12, 32e9, 0.15, # 193.95 THz, 32 Gbaud
Power(1e-3, 1e-6, 1e-6)), # 1 mW, 1uW, 1uW
Channel(1, 195.95e12, 32e9, 0.15, # 195.95 THz, 32 Gbaud
Power(1.2e-3, 1e-6, 1e-6)), # 1.2 mW, 1uW, 1uW
)
si = SpectralInformation()
spacing = 0.05 # THz
si = si._replace(carriers=tuple(Channel(f+1, 191.3+spacing*(f+1), 32e9, 0.15, Power(1e-3, f, 1)) for f in range(96)))
print(f'si = {si}')
print(f'si = {si.carriers[0].power.nli}')
print(f'si = {si.carriers[20].power.nli}')
si2 = si._replace(carriers=tuple(c._replace(power = c.power._replace(nli = c.power.nli * 1e5))
for c in si.carriers))
print(f'si2 = {si2}')

View File

@@ -5,31 +5,92 @@
gnpy.core.network
=================
Working with networks which consist of network elements
This module contains functions for constructing networks of network elements.
'''
from operator import attrgetter
from gnpy.core import ansi_escapes, elements
from gnpy.core.convert import convert_file
from networkx import DiGraph
from numpy import arange
from scipy.interpolate import interp1d
from logging import getLogger
from os import path
from operator import itemgetter, attrgetter
from gnpy.core import elements
from gnpy.core.elements import Fiber, Edfa, Transceiver, Roadm, Fused, RamanFiber
from gnpy.core.equipment import edfa_nf
from gnpy.core.exceptions import ConfigurationError, NetworkTopologyError
from gnpy.core.utils import round2float, convert_length
from gnpy.core.units import UNITS
from gnpy.core.utils import (load_json, save_json, round2float, db2lin,
merge_amplifier_restrictions)
from gnpy.core.science_utils import SimParams
from collections import namedtuple
logger = getLogger(__name__)
def edfa_nf(gain_target, variety_type, equipment):
amp_params = equipment['Edfa'][variety_type]
amp = elements.Edfa(
uid='calc_NF',
params=amp_params.__dict__,
operational={
'gain_target': gain_target,
'tilt_target': 0
def load_network(filename, equipment, name_matching = False):
json_filename = ''
if filename.suffix.lower() == '.xls':
logger.info('Automatically generating topology JSON file')
json_filename = convert_file(filename, name_matching)
elif filename.suffix.lower() == '.json':
json_filename = filename
else:
raise ValueError(f'unsupported topology filename extension {filename.suffix.lower()}')
json_data = load_json(json_filename)
return network_from_json(json_data, equipment)
def save_network(filename, network):
filename_output = path.splitext(filename)[0] + '_auto_design.json'
json_data = network_to_json(network)
save_json(json_data, filename_output)
def network_from_json(json_data, equipment):
# NOTE|dutc: we could use the following, but it would tie our data format
# too closely to the graph library
# from networkx import node_link_graph
g = DiGraph()
for el_config in json_data['elements']:
typ = el_config.pop('type')
variety = el_config.pop('type_variety', 'default')
if typ in equipment and variety in equipment[typ]:
extra_params = equipment[typ][variety]
temp = el_config.setdefault('params', {})
temp = merge_amplifier_restrictions(temp, extra_params.__dict__)
el_config['params'] = temp
elif typ in ['Edfa', 'Fiber']: # catch it now because the code will crash later!
raise ConfigurationError(f'The {typ} of variety type {variety} was not recognized:'
'\nplease check it is properly defined in the eqpt_config json file')
cls = getattr(elements, typ)
el = cls(**el_config)
g.add_node(el)
nodes = {k.uid: k for k in g.nodes()}
for cx in json_data['connections']:
from_node, to_node = cx['from_node'], cx['to_node']
try:
if isinstance(nodes[from_node], Fiber):
edge_length = nodes[from_node].params.length
else:
edge_length = 0.01
g.add_edge(nodes[from_node], nodes[to_node], weight = edge_length)
except KeyError:
raise NetworkTopologyError(f'can not find {from_node} or {to_node} defined in {cx}')
return g
def network_to_json(network):
data = {
'elements': [n.to_json for n in network]
}
)
amp.pin_db = 0
amp.nch = 88
amp.slot_width = 50e9
return amp._calc_nf(True)
connections = {
'connections': [{"from_node": n.uid,
"to_node": next_n.uid}
for n in network
for next_n in network.successors(n) if next_n is not None]
}
data.update(connections)
return data
def select_edfa(raman_allowed, gain_target, power_target, equipment, uid, restrictions=None):
"""amplifer selection algorithm
@@ -42,7 +103,7 @@ def select_edfa(raman_allowed, gain_target, power_target, equipment, uid, restri
# because the main use case is to have specific ROADM amplifiers which are not allowed for ILA
# with the auto design
edfa_dict = {name: amp for (name, amp) in equipment['Edfa'].items()
if restrictions is None or name in restrictions}
pin = power_target - gain_target
@@ -53,49 +114,51 @@ def select_edfa(raman_allowed, gain_target, power_target, equipment, uid, restri
# extended gain max allowance TARGET_EXTENDED_GAIN is coming from eqpt_config.json
# the power attribute includes power AND gain limitations
edfa_list = [Edfa_list(
variety=edfa_variety,
power=min(pin + edfa.gain_flatmax + TARGET_EXTENDED_GAIN, edfa.p_max) - power_target,
gain_min=gain_target + 3 - edfa.gain_min,
nf=edfa_nf(gain_target, edfa_variety, equipment))
for edfa_variety, edfa in edfa_dict.items()
if ((edfa.allowed_for_design or restrictions is not None) and not edfa.raman)]
# consider a Raman list because of different gain_min requirement:
# do not allow extended gain min for Raman
raman_list = [Edfa_list(
variety=edfa_variety,
power=min(pin + edfa.gain_flatmax + TARGET_EXTENDED_GAIN, edfa.p_max) - power_target,
gain_min=gain_target - edfa.gain_min,
nf=edfa_nf(gain_target, edfa_variety, equipment))
for edfa_variety, edfa in edfa_dict.items()
if (edfa.allowed_for_design and edfa.raman)] \
if raman_allowed else []
# merge raman and edfa lists
amp_list = edfa_list + raman_list
# filter on min gain limitation:
acceptable_gain_min_list = [x for x in amp_list if x.gain_min > 0]
if len(acceptable_gain_min_list) < 1:
# do not take this empty list into account for the rest of the code
# but issue a warning to the user and do not consider Raman
# Raman below min gain should not be allowed because it is meant to be a design requirement
# and raman padding at the amplifier input is impossible!
if len(edfa_list) < 1:
raise ConfigurationError(f'auto_design could not find any amplifier \
@@ -104,45 +167,48 @@ def select_edfa(raman_allowed, gain_target, power_target, equipment, uid, restri
else:
# TODO: convert to logging
print(
f'{ansi_escapes.red}WARNING:{ansi_escapes.reset} target gain in node {uid} is below all available amplifiers min gain: \
amplifier input padding will be assumed, consider increasing span fiber padding instead'
)
acceptable_gain_min_list = edfa_list
# filter on gain+power limitation:
# this list checks both the gain and the power requirement
# because of the way .power is calculated in the list
acceptable_power_list = [x for x in acceptable_gain_min_list if x.power > 0]
if len(acceptable_power_list) < 1:
# no amplifier satisfies the required power, so pick the highest power(s):
power_max = max(acceptable_gain_min_list, key=attrgetter('power')).power
# check and pick if other amplifiers may have a similar gain/power
# allow a 0.3 dB power range
# this allows choosing an amplifier with a better NF subsequently
acceptable_power_list = [x for x in acceptable_gain_min_list
if x.power - power_max > -0.3]
# gain and power requirements are resolved,
# => choose the amp with the best NF among the acceptable ones:
selected_edfa = min(acceptable_power_list, key=attrgetter('nf'))  # filter on NF
# check what are the gain and power limitations of this amp
power_reduction = round(min(selected_edfa.power, 0), 2)
if power_reduction < -0.5:
print(
f'{ansi_escapes.red}WARNING:{ansi_escapes.reset} target gain and power in node {uid}\n \
is beyond all available amplifiers capabilities and/or extended_gain_range:\n\
a power reduction of {power_reduction} is applied\n'
)
return selected_edfa.variety, power_reduction
def target_power(network, node, equipment): # get_fiber_dp
if isinstance(node, elements.Roadm):
return 0
def target_power(network, node, equipment): #get_fiber_dp
SPAN_LOSS_REF = 20
POWER_SLOPE = 0.3
power_mode = equipment['Span']['default'].power_mode
dp_range = list(equipment['Span']['default'].delta_power_range_db)
node_loss = span_loss(network, node)
@@ -150,55 +216,61 @@ def target_power(network, node, equipment): # get_fiber_dp
dp = round2float((node_loss - SPAN_LOSS_REF) * POWER_SLOPE, dp_range[2])
dp = max(dp_range[0], dp)
dp = min(dp_range[1], dp)
except IndexError:
except KeyError:
raise ConfigurationError(f'invalid delta_power_range_db definition in eqpt_config[Span]'
f'delta_power_range_db: [lower_bound, upper_bound, step]')
f'delta_power_range_db: [lower_bound, upper_bound, step]')
if isinstance(node, Roadm):
dp = 0
return dp
_fiber_fused_types = (elements.Fused, elements.Fiber)
def prev_node_generator(network, node):
"""fused spans interest:
iterate over all predecessors while they are either Fused or Fibers succeeded by Fused"""
iterate over all predecessors while they are Fused or Fiber type"""
try:
prev_node = next(network.predecessors(node))
prev_node = next(n for n in network.predecessors(node))
except StopIteration:
if isinstance(node, elements.Transceiver):
return
raise NetworkTopologyError(f'Node {node.uid} is not properly connected, please check network topology')
if ((isinstance(prev_node, elements.Fused) and isinstance(node, _fiber_fused_types)) or
(isinstance(prev_node, _fiber_fused_types) and isinstance(node, elements.Fused))):
# yield and re-iterate
if isinstance(prev_node, Fused) or isinstance(node, Fused) and not isinstance(prev_node, Roadm):
yield prev_node
yield from prev_node_generator(network, prev_node)
else:
StopIteration
def next_node_generator(network, node):
"""fused spans interest:
iterate over all predecessors while they are either Fused or Fibers preceded by Fused"""
iterate over all successors while they are Fused or Fiber type"""
try:
next_node = next(network.successors(node))
next_node = next(n for n in network.successors(node))
except StopIteration:
if isinstance(node, elements.Transceiver):
return
raise NetworkTopologyError(f'Node {node.uid} is not properly connected, please check network topology')
if ((isinstance(next_node, elements.Fused) and isinstance(node, _fiber_fused_types)) or
(isinstance(next_node, _fiber_fused_types) and isinstance(node, elements.Fused))):
raise NetworkTopologyError(f'Node {node.uid} is not properly connected, please check network topology')
# yield and re-iterate
if isinstance(next_node, Fused) or isinstance(node, Fused) and not isinstance(next_node, Roadm):
yield next_node
yield from next_node_generator(network, next_node)
else:
StopIteration
def span_loss(network, node):
"""Total loss of a span (Fiber and Fused nodes) which contains the given node"""
"""Fused span interest:
return the total span loss of all the fibers spliced by a Fused node"""
loss = node.loss if node.passive else 0
loss += sum(n.loss for n in prev_node_generator(network, node))
loss += sum(n.loss for n in next_node_generator(network, node))
try:
prev_node = next(n for n in network.predecessors(node))
if isinstance(prev_node, Fused):
loss += sum(n.loss for n in prev_node_generator(network, node))
except StopIteration:
pass
try:
next_node = next(n for n in network.successors(node))
if isinstance(next_node, Fused):
loss += sum(n.loss for n in next_node_generator(network, node))
except StopIteration:
pass
return loss
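# Hedged example: for a chain 'Fiber A (8 dB) -> Fused -> Fiber B (6 dB)', calling
# span_loss() on Fiber A walks through the Fused splice and returns the loss of both
# fibers plus any splice loss, so padding and gain targets see the spliced span as one.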
def find_first_node(network, node):
"""Fused node interest:
returns the 1st node at the origin of a succession of fused nodes
@@ -208,7 +280,6 @@ def find_first_node(network, node):
pass
return this_node
def find_last_node(network, node):
"""Fused node interest:
returns the last node in a succession of fused nodes
@@ -218,228 +289,170 @@ def find_last_node(network, node):
pass
return this_node
def set_amplifier_voa(amp, power_target, power_mode):
VOA_MARGIN = 1  # do not maximize the VOA optimization
if amp.out_voa is None:
if power_mode and amp.params.out_voa_auto:
voa = min(amp.params.p_max - power_target,
amp.params.gain_flatmax - amp.effective_gain)
voa = max(round2float(voa, 0.5) - VOA_MARGIN, 0)
if power_mode:
gain_target = amp.effective_gain
voa = min(amp.params.p_max-power_target,
amp.params.gain_flatmax-amp.effective_gain)
voa = max(round2float(max(voa, 0), 0.5) - VOA_MARGIN, 0) if amp.params.out_voa_auto else 0
amp.delta_p = amp.delta_p + voa
amp.effective_gain = amp.effective_gain + voa
else:
voa = 0  # no output voa optimization in gain mode
amp.out_voa = voa
def set_egress_amplifier(network, this_node, equipment, pref_ch_db, pref_total_db):
""" this node can be a transceiver or a ROADM (same function called in both cases)
"""
def set_egress_amplifier(network, roadm, equipment, pref_total_db):
power_mode = equipment['Span']['default'].power_mode
next_oms = (n for n in network.successors(this_node) if not isinstance(n, elements.Transceiver))
this_node_degree = {k: v for k, v in this_node.per_degree_pch_out_db.items()} if hasattr(this_node, 'per_degree_pch_out_db') else {}
next_oms = (n for n in network.successors(roadm) if not isinstance(n, Transceiver))
for oms in next_oms:
# go through all the OMS departing from the ROADM
prev_node = this_node
node = oms
# if isinstance(next_node, elements.Fused): #support ROADM wo egress amp for metro applications
#go through all the OMS departing from the Roadm
node = roadm
prev_node = roadm
next_node = oms
# if isinstance(next_node, Fused): #support ROADM wo egress amp for metro applications
# node = find_last_node(next_node)
# next_node = next(n for n in network.successors(node))
# next_node = find_last_node(next_node)
if node.uid not in this_node_degree:
# if no target power is defined on this degree or no per degree target power is given use the global one
# if target_pch_out_db is not an attribute, then the element must be a transceiver
this_node_degree[node.uid] = getattr(this_node.params, 'target_pch_out_db', 0)
# use the target power on this degree
prev_dp = this_node_degree[node.uid] - pref_ch_db
if node.per_degree_target_pch_out_db:
# find the target power on this degree
try:
prev_dp = next(el["target_pch_out_db"] for el in \
node.per_degree_target_pch_out_db if el["to_node"]==next_node.uid)
except StopIteration:
# if no target power is defined on this degree use the global one
prev_dp = getattr(node.params, 'target_pch_out_db', 0)
else:
# if no per degree target power is given use the global one
prev_dp = getattr(node.params, 'target_pch_out_db', 0)
dp = prev_dp
prev_voa = 0
voa = 0
visited_nodes = []
while not (isinstance(node, elements.Roadm) or isinstance(node, elements.Transceiver)):
# go through all nodes in the OMS (loop until next Roadm instance)
try:
next_node = next(network.successors(node))
except StopIteration:
raise NetworkTopologyError(f'{type(node).__name__} {node.uid} is not properly connected, please check network topology')
visited_nodes.append(node)
if next_node in visited_nodes:
raise NetworkTopologyError(f'Loop detected for {type(node).__name__} {node.uid}, please check network topology')
if isinstance(node, elements.Edfa):
while True:
#go through all nodes in the OMS (loop until next Roadm instance)
if isinstance(node, Edfa):
node_loss = span_loss(network, prev_node)
voa = node.out_voa if node.out_voa else 0
if node.delta_p is None:
dp = target_power(network, next_node, equipment) + voa
dp = target_power(network, next_node, equipment)
else:
dp = node.delta_p
gain_from_dp = node_loss + dp - prev_dp + prev_voa
if node.effective_gain is None or power_mode:
gain_target = node_loss + dp - prev_dp + prev_voa
else: # gain mode with effective_gain
gain_target = gain_from_dp
else: #gain mode with effective_gain
gain_target = node.effective_gain
dp = prev_dp - node_loss - prev_voa + gain_target
dp = prev_dp - node_loss + gain_target
power_target = pref_total_db + dp
if isinstance(prev_node, elements.Fiber):
raman_allowed = False
if isinstance(prev_node, Fiber):
max_fiber_lineic_loss_for_raman = \
equipment['Span']['default'].max_fiber_lineic_loss_for_raman * 1e-3 # dB/m
equipment['Span']['default'].max_fiber_lineic_loss_for_raman
raman_allowed = prev_node.params.loss_coef < max_fiber_lineic_loss_for_raman
else:
raman_allowed = False
if node.params.type_variety == '':
if node.variety_list and isinstance(node.variety_list, list):
restrictions = node.variety_list
elif isinstance(prev_node, elements.Roadm) and prev_node.restrictions['booster_variety_list']:
# implementation of restrictions on roadm boosters
if isinstance(prev_node,Roadm):
if prev_node.restrictions['booster_variety_list']:
restrictions = prev_node.restrictions['booster_variety_list']
elif isinstance(next_node, elements.Roadm) and next_node.restrictions['preamp_variety_list']:
# implementation of restrictions on roadm preamp
else:
restrictions = None
elif isinstance(next_node,Roadm):
# implementation of restrictions on roadm preamp
if next_node.restrictions['preamp_variety_list']:
restrictions = next_node.restrictions['preamp_variety_list']
else:
restrictions = None
edfa_variety, power_reduction = select_edfa(raman_allowed, gain_target, power_target, equipment, node.uid, restrictions)
else:
restrictions = None
if node.params.type_variety == '':
edfa_variety, power_reduction = select_edfa(raman_allowed,
gain_target, power_target, equipment, node.uid, restrictions)
extra_params = equipment['Edfa'][edfa_variety]
node.params.update_params(extra_params.__dict__)
dp += power_reduction
gain_target += power_reduction
else:
if node.params.raman and not raman_allowed:
if isinstance(prev_node, elements.Fiber):
print(f'{ansi_escapes.red}WARNING{ansi_escapes.reset}: raman is used in node {node.uid}\n '
'but fiber lineic loss is above threshold\n')
else:
print(f'{ansi_escapes.red}WARNING{ansi_escapes.reset}: raman is used in node {node.uid}\n '
'but previous node is not a fiber\n')
# if variety is imposed by user, and if the gain_target (computed or imposed) is also above
# variety max gain + extended range, then warn that gain > max_gain + extended range
if gain_target - equipment['Edfa'][node.params.type_variety].gain_flatmax - \
equipment['Span']['default'].target_extended_gain > 1e-2:
# 1e-2 to allow a small margin according to round2float min step
print(f'{ansi_escapes.red}WARNING{ansi_escapes.reset}: '
f'effective gain in Node {node.uid} is above user '
f'specified amplifier {node.params.type_variety}\n'
f'max flat gain: {equipment["Edfa"][node.params.type_variety].gain_flatmax}dB ; '
f'required gain: {gain_target}dB. Please check amplifier type.')
elif node.params.raman and not raman_allowed:
print(
f'\x1b[1;31;40m'\
+ f'WARNING: raman is used in node {node.uid}\n \
but fiber lineic loss is above threshold\n'\
+ '\x1b[0m'
)
node.delta_p = dp if power_mode else None
node.effective_gain = gain_target
set_amplifier_voa(node, power_target, power_mode)
if isinstance(next_node, Roadm) or isinstance(next_node, Transceiver):
break
prev_dp = dp
prev_voa = voa
prev_node = node
node = next_node
# print(f'{node.uid}')
if isinstance(this_node, elements.Roadm):
this_node.per_degree_pch_out_db = {k: v for k, v in this_node_degree.items()}
next_node = next(n for n in network.successors(node))
def add_roadm_booster(network, roadm):
next_nodes = [n for n in network.successors(roadm)
if not (isinstance(n, elements.Transceiver) or isinstance(n, elements.Fused) or isinstance(n, elements.Edfa))]
# no amplification for fused spans or TRX
for next_node in next_nodes:
network.remove_edge(roadm, next_node)
amp = elements.Edfa(
uid=f'Edfa_booster_{roadm.uid}_to_{next_node.uid}',
params={},
metadata={
'location': {
'latitude': roadm.lat,
'longitude': roadm.lng,
'city': roadm.loc.city,
'region': roadm.loc.region,
}
},
operational={
'gain_target': None,
'tilt_target': 0,
})
def add_egress_amplifier(network, node):
next_nodes = [n for n in network.successors(node)
if not (isinstance(n, Transceiver) or isinstance(n, Fused) or isinstance(n, Edfa))]
#no amplification for fused spans or TRX
for i, next_node in enumerate(next_nodes):
network.remove_edge(node, next_node)
amp = Edfa(
uid = f'Edfa{i}_{node.uid}',
params = {},
metadata = {
'location': {
'latitude': (node.lat * 2 + next_node.lat * 2) / 4,
'longitude': (node.lng * 2 + next_node.lng * 2) / 4,
'city': node.loc.city,
'region': node.loc.region,
}
},
operational = {
'gain_target': None,
'tilt_target': 0,
})
network.add_node(amp)
network.add_edge(roadm, amp, weight=0.01)
network.add_edge(amp, next_node, weight=0.01)
def add_roadm_preamp(network, roadm):
prev_nodes = [n for n in network.predecessors(roadm)
if not (isinstance(n, elements.Transceiver) or isinstance(n, elements.Fused) or isinstance(n, elements.Edfa))]
# no amplification for fused spans or TRX
for prev_node in prev_nodes:
network.remove_edge(prev_node, roadm)
amp = elements.Edfa(
uid=f'Edfa_preamp_{roadm.uid}_from_{prev_node.uid}',
params={},
metadata={
'location': {
'latitude': roadm.lat,
'longitude': roadm.lng,
'city': roadm.loc.city,
'region': roadm.loc.region,
}
},
operational={
'gain_target': None,
'tilt_target': 0,
})
network.add_node(amp)
if isinstance(prev_node, elements.Fiber):
edgeweight = prev_node.params.length
if isinstance(node,Fiber):
edgeweight = node.params.length
else:
edgeweight = 0.01
network.add_edge(prev_node, amp, weight=edgeweight)
network.add_edge(amp, roadm, weight=0.01)
def add_inline_amplifier(network, fiber):
next_node = next(network.successors(fiber))
if isinstance(next_node, elements.Fiber) or isinstance(next_node, elements.RamanFiber):
# no amplification for fused spans or TRX
network.remove_edge(fiber, next_node)
amp = elements.Edfa(
uid=f'Edfa_{fiber.uid}',
params={},
metadata={
'location': {
'latitude': (fiber.lat + next_node.lat) / 2,
'longitude': (fiber.lng + next_node.lng) / 2,
'city': fiber.loc.city,
'region': fiber.loc.region,
}
},
operational={
'gain_target': None,
'tilt_target': 0,
})
network.add_node(amp)
network.add_edge(fiber, amp, weight=fiber.params.length)
network.add_edge(amp, next_node, weight=0.01)
network.add_edge(node, amp, weight = edgeweight)
network.add_edge(amp, next_node, weight = 0.01)
def calculate_new_length(fiber_length, bounds, target_length):
if fiber_length < bounds.stop:
return fiber_length, 1
n_spans2 = int(fiber_length // target_length)
n_spans1 = n_spans2 + 1
n_spans = int(fiber_length // target_length)
length1 = fiber_length / n_spans1
length2 = fiber_length / n_spans2
length1 = fiber_length / (n_spans+1)
delta1 = target_length-length1
result1 = (length1, n_spans+1)
if (bounds.start <= length1 <= bounds.stop) and not(bounds.start <= length2 <= bounds.stop):
return (length1, n_spans1)
elif (bounds.start <= length2 <= bounds.stop) and not(bounds.start <= length1 <= bounds.stop):
return (length2, n_spans2)
elif target_length - length1 < length2 - target_length:
return (length1, n_spans1)
length2 = fiber_length / n_spans
delta2 = length2-target_length
result2 = (length2, n_spans)
if (bounds.start<=length1<=bounds.stop) and not(bounds.start<=length2<=bounds.stop):
result = result1
elif (bounds.start<=length2<=bounds.stop) and not(bounds.start<=length1<=bounds.stop):
result = result2
else:
return (length2, n_spans2)
result = result1 if delta1 < delta2 else result2
return result
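# Hedged worked example (lengths are illustrative): a 200 km fiber with
# target_length = 90 km and bounds = range(50_000, 150_000) yields candidates of
# 200/3 ~ 66.7 km and 200/2 = 100 km; both fit the bounds and 100 km is closer to
# the target, so calculate_new_length() returns (100_000.0, 2), i.e. two spans.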
def split_fiber(network, fiber, bounds, target_length, equipment):
new_length, n_spans = calculate_new_length(fiber.params.length, bounds, target_length)
new_length, n_spans = calculate_new_length(fiber.length, bounds, target_length)
if n_spans == 1:
return
@@ -451,106 +464,108 @@ def split_fiber(network, fiber, bounds, target_length, equipment):
network.remove_node(fiber)
fiber.params.length = new_length
fiber_params = fiber.params._asdict()
fiber_params['length'] = new_length / UNITS[fiber.params.length_units]
fiber_params['con_in'] = fiber.con_in
fiber_params['con_out'] = fiber.con_out
xpos = [prev_node.lng + (next_node.lng - prev_node.lng) * (n + 0.5) / n_spans for n in range(n_spans)]
ypos = [prev_node.lat + (next_node.lat - prev_node.lat) * (n + 0.5) / n_spans for n in range(n_spans)]
f = interp1d([prev_node.lng, next_node.lng], [prev_node.lat, next_node.lat])
xpos = [prev_node.lng + (next_node.lng - prev_node.lng) * (n+1)/(n_spans+1) for n in range(n_spans)]
ypos = f(xpos)
for span, lng, lat in zip(range(n_spans), xpos, ypos):
new_span = elements.Fiber(uid=f'{fiber.uid}_({span+1}/{n_spans})',
type_variety=fiber.type_variety,
metadata={
'location': {
'latitude': lat,
'longitude': lng,
'city': fiber.loc.city,
'region': fiber.loc.region,
}
},
params=fiber.params.asdict())
if isinstance(prev_node, elements.Fiber):
new_span = Fiber(uid = f'{fiber.uid}_({span+1}/{n_spans})',
metadata = {
'location': {
'latitude': lat,
'longitude': lng,
'city': fiber.loc.city,
'region': fiber.loc.region,
}
},
params = fiber_params)
if isinstance(prev_node,Fiber):
edgeweight = prev_node.params.length
else:
edgeweight = 0.01
network.add_edge(prev_node, new_span, weight=edgeweight)
network.add_edge(prev_node, new_span, weight = edgeweight)
prev_node = new_span
if isinstance(prev_node, elements.Fiber):
if isinstance(prev_node,Fiber):
edgeweight = prev_node.params.length
else:
edgeweight = 0.01
network.add_edge(prev_node, next_node, weight=edgeweight)
edgeweight = 0.01
network.add_edge(prev_node, next_node, weight = edgeweight)
def add_connector_loss(network, fibers, default_con_in, default_con_out, EOL):
for fiber in fibers:
try:
next_node = next(network.successors(fiber))
except StopIteration:
raise NetworkTopologyError(f'Fiber {fiber.uid} is not properly connected, please check network topology')
if fiber.params.con_in is None:
fiber.params.con_in = default_con_in
if fiber.params.con_out is None:
fiber.params.con_out = default_con_out
if not isinstance(next_node, elements.Fused):
fiber.params.con_out += EOL
if fiber.con_in is None: fiber.con_in = default_con_in
if fiber.con_out is None: fiber.con_out = default_con_out
next_node = next(n for n in network.successors(fiber))
if not isinstance(next_node, Fused):
fiber.con_out += EOL
def add_fiber_padding(network, fibers, padding):
"""last_fibers = (fiber for n in network.nodes()
if not (isinstance(n, elements.Fiber) or isinstance(n, elements.Fused))
if not (isinstance(n, Fiber) or isinstance(n, Fused))
for fiber in network.predecessors(n)
if isinstance(fiber, elements.Fiber))"""
if isinstance(fiber, Fiber))"""
for fiber in fibers:
this_span_loss = span_loss(network, fiber)
try:
next_node = next(network.successors(fiber))
except StopIteration:
raise NetworkTopologyError(f'Fiber {fiber.uid} is not properly connected, please check network topology')
if isinstance(next_node, elements.Fused):
continue
this_span_loss = span_loss(network, fiber)
if this_span_loss < padding:
# add a padding att_in at the input of the 1st fiber:
# address the case when several fibers are spliced together
if this_span_loss < padding and not (isinstance(next_node, Fused)):
#add a padding att_in at the input of the 1st fiber:
#address the case when several fibers are spliced together
first_fiber = find_first_node(network, fiber)
# in order to support no booster, a fused element might be placed
# just after a roadm: need to check that first_fiber is really a fiber
if isinstance(first_fiber, elements.Fiber):
first_fiber.params.att_in = first_fiber.params.att_in + padding - this_span_loss
if isinstance(first_fiber,Fiber):
if first_fiber.att_in is None:
first_fiber.att_in = padding - this_span_loss
else:
first_fiber.att_in = first_fiber.att_in + padding - this_span_loss
def build_network(network, equipment, pref_ch_db, pref_total_db, no_insert_edfas=False):
def build_network(network, equipment, pref_ch_db, pref_total_db):
default_span_data = equipment['Span']['default']
max_length = int(convert_length(default_span_data.max_length, default_span_data.length_units))
min_length = max(int(default_span_data.padding / 0.2 * 1e3), 50_000)
max_length = int(default_span_data.max_length * UNITS[default_span_data.length_units])
min_length = max(int(default_span_data.padding/0.2*1e3),50_000)
bounds = range(min_length, max_length)
target_length = max(min_length, min(max_length, 90_000))
target_length = max(min_length, 90_000)
default_con_in = default_span_data.con_in
default_con_out = default_span_data.con_out
padding = default_span_data.padding
# set roadm loss for gain_mode before to build network
fibers = [f for f in network.nodes() if isinstance(f, elements.Fiber)]
add_connector_loss(network, fibers, default_span_data.con_in, default_span_data.con_out, default_span_data.EOL)
#set roadm loss for gain_mode before to build network
fibers = [f for f in network.nodes() if isinstance(f, Fiber)]
add_connector_loss(network, fibers, default_con_in, default_con_out, default_span_data.EOL)
add_fiber_padding(network, fibers, padding)
# don't group split fiber and add amp in the same loop
# =>for code clarity (at the expense of speed):
for fiber in fibers:
split_fiber(network, fiber, bounds, target_length, equipment)
roadms = [r for r in network.nodes() if isinstance(r, elements.Roadm)]
amplified_nodes = [n for n in network.nodes()
if isinstance(n, Fiber) or isinstance(n, Roadm)]
if not no_insert_edfas:
for fiber in fibers:
split_fiber(network, fiber, bounds, target_length, equipment)
for roadm in roadms:
add_roadm_preamp(network, roadm)
add_roadm_booster(network, roadm)
fibers = [f for f in network.nodes() if isinstance(f, elements.Fiber)]
for fiber in fibers:
add_inline_amplifier(network, fiber)
add_fiber_padding(network, fibers, default_span_data.padding)
for node in amplified_nodes:
add_egress_amplifier(network, node)
roadms = [r for r in network.nodes() if isinstance(r, Roadm)]
for roadm in roadms:
set_egress_amplifier(network, roadm, equipment, pref_ch_db, pref_total_db)
set_egress_amplifier(network, roadm, equipment, pref_total_db)
trx = [t for t in network.nodes() if isinstance(t, elements.Transceiver)]
for t in trx:
next_node = next(network.successors(t), None)
if next_node and not isinstance(next_node, elements.Roadm):
set_egress_amplifier(network, t, equipment, 0, pref_total_db)
#support older json input topology wo Roadms:
if len(roadms) == 0:
trx = [t for t in network.nodes() if isinstance(t, Transceiver)]
for t in trx:
set_egress_amplifier(network, t, equipment, pref_total_db)
def load_sim_params(filename):
sim_params = load_json(filename)
return SimParams(params=sim_params)
def configure_network(network, sim_params):
for node in network.nodes:
if isinstance(node, RamanFiber):
node.sim_params = sim_params

gnpy/core/node.py Normal file (56 lines)
View File

@@ -0,0 +1,56 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
gnpy.core.node
==============
This module contains the base class for a network element.
Strictly, a network element is any callable which accepts an immutable
:class:`.info.SpectralInformation` object and returns an :class:`.info.SpectralInformation` object
(a copy).
Network elements MUST implement two attributes .uid and .name representing a
unique identifier and a printable name.
This base class provides a more convenient way to define a network element
via subclassing.
'''
from uuid import uuid4
from collections import namedtuple
class Location(namedtuple('Location', 'latitude longitude city region')):
def __new__(cls, latitude=0, longitude=0, city=None, region=None):
return super().__new__(cls, latitude, longitude, city, region)
class Node:
def __init__(self, uid, name=None, params=None, metadata=None, operational=None):
if name is None:
name = uid
self.uid, self.name = uid, name
if metadata is None:
metadata = {'location': {}}
if metadata and not isinstance(metadata.get('location'), Location):
metadata['location'] = Location(**metadata.pop('location', {}))
self.params, self.metadata, self.operational = params, metadata, operational
@property
def coords(self):
return self.lng, self.lat
@property
def location(self):
return self.metadata['location']
loc = location
@property
def longitude(self):
return self.location.longitude
lng = longitude
@property
def latitude(self):
return self.location.latitude
lat = latitude
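The module docstring above presents subclassing as the intended way to define a network element. A hypothetical sketch of that pattern; the `Attenuator` element and its behaviour are invented for illustration and are not part of gnpy:

```python
class Attenuator(Node):
    """Toy element: a real one would return a modified copy of the input."""
    def __init__(self, *args, loss_db=0.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.loss_db = loss_db

    def __call__(self, spectral_info):
        # illustrative pass-through; real gnpy elements return a new SpectralInformation
        return spectral_info

att = Attenuator(uid='att-01', metadata={'location': {'city': 'Praha'}})
print(att.uid, att.coords)   # 'att-01' (0, 0) - latitude/longitude default to zero
```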


@@ -1,314 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
gnpy.core.parameters
====================
This module contains all parameters to configure standard network elements.
"""
from scipy.constants import c, pi
from numpy import asarray, array
from gnpy.core.utils import convert_length
from gnpy.core.exceptions import ParametersError
class Parameters:
def asdict(self):
class_dict = self.__class__.__dict__
instance_dict = self.__dict__
new_dict = {}
for key in class_dict:
if isinstance(class_dict[key], property):
new_dict[key] = instance_dict['_' + key]
return new_dict
class PumpParams(Parameters):
def __init__(self, power, frequency, propagation_direction):
self.power = power
self.frequency = frequency
self.propagation_direction = propagation_direction.lower()
class RamanParams(Parameters):
def __init__(self, flag=False, result_spatial_resolution=10e3, solver_spatial_resolution=50):
""" Simulation parameters used within the Raman Solver
:params flag: boolean for enabling/disable the evaluation of the Raman power profile in frequency and position
:params result_spatial_resolution: spatial resolution of the evaluated Raman power profile
:params solver_spatial_resolution: spatial step for the iterative solution of the first order ode
"""
self.flag = flag
self.result_spatial_resolution = result_spatial_resolution # [m]
self.solver_spatial_resolution = solver_spatial_resolution # [m]
class NLIParams(Parameters):
def __init__(self, method='gn_model_analytic', dispersion_tolerance=1, phase_shift_tolerance=0.1,
computed_channels=None):
""" Simulation parameters used within the Nli Solver
:params method: formula for NLI calculation
:params dispersion_tolerance: tuning parameter for ggn model solution
:params phase_shift_tolerance: tuning parameter for ggn model solution
:params computed_channels: the NLI is evaluated for these channels and extrapolated for the others
"""
self.method = method.lower()
self.dispersion_tolerance = dispersion_tolerance
self.phase_shift_tolerance = phase_shift_tolerance
self.computed_channels = computed_channels
class SimParams(Parameters):
_shared_dict = {'nli_params': NLIParams(), 'raman_params': RamanParams()}
def __init__(self):
if type(self) == SimParams:
raise NotImplementedError('Instances of SimParams cannot be generated')
@classmethod
def set_params(cls, sim_params):
cls._shared_dict['nli_params'] = NLIParams(**sim_params.get('nli_params', {}))
cls._shared_dict['raman_params'] = RamanParams(**sim_params.get('raman_params', {}))
@classmethod
def get(cls):
self = cls.__new__(cls)
return self
@property
def nli_params(self):
return self._shared_dict['nli_params']
@property
def raman_params(self):
return self._shared_dict['raman_params']
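A short usage sketch of the shared-state pattern above; the parameter values are illustrative only:

```python
# set_params() fills the class-wide _shared_dict, get() hands back a lightweight
# accessor that never runs __init__ (direct instantiation is forbidden).
SimParams.set_params({
    'raman_params': {'flag': True, 'solver_spatial_resolution': 50},
    'nli_params': {'method': 'ggn_spectrally_separated'},
})
sim_params = SimParams.get()
print(sim_params.raman_params.flag)     # True
print(sim_params.nli_params.method)     # 'ggn_spectrally_separated'
```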
class RoadmParams(Parameters):
def __init__(self, **kwargs):
try:
self.target_pch_out_db = kwargs['target_pch_out_db']
self.add_drop_osnr = kwargs['add_drop_osnr']
self.pmd = kwargs['pmd']
self.pdl = kwargs['pdl']
self.restrictions = kwargs['restrictions']
self.per_degree_pch_out_db = kwargs['per_degree_pch_out_db'] if 'per_degree_pch_out_db' in kwargs else {}
except KeyError as e:
raise ParametersError(f'ROADM configurations must include {e}. Configuration: {kwargs}')
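A construction sketch for the class above; all values below are illustrative and not taken from any shipped equipment library:

```python
roadm_params = RoadmParams(
    target_pch_out_db=-20,      # per-channel power target at ROADM egress [dBm]
    add_drop_osnr=38,
    pmd=3e-12,
    pdl=0.5,
    restrictions={'preamp_variety_list': [], 'booster_variety_list': []},
    per_degree_pch_out_db={'roadm B': -21},   # hypothetical per-degree override
)
```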
class FusedParams(Parameters):
def __init__(self, **kwargs):
self.loss = kwargs['loss'] if 'loss' in kwargs else 1
# SSMF Raman coefficient profile normalized with respect to the effective area (Cr * A_eff)
CR_NORM = array([
0., 7.802e-16, 2.4236e-15, 4.0504e-15, 5.6606e-15, 6.8973e-15, 7.802e-15, 8.4162e-15, 8.8727e-15, 9.2877e-15,
1.01011e-14, 1.05244e-14, 1.13295e-14, 1.2367e-14, 1.3695e-14, 1.5023e-14, 1.64091e-14, 1.81936e-14, 2.04927e-14,
2.28167e-14, 2.48917e-14, 2.66098e-14, 2.82615e-14, 2.98136e-14, 3.1042e-14, 3.17558e-14, 3.18803e-14, 3.17558e-14,
3.15566e-14, 3.11748e-14, 2.94567e-14, 3.14985e-14, 2.8552e-14, 2.43439e-14, 1.67992e-14, 9.6114e-15, 7.02180e-15,
5.9262e-15, 5.6938e-15, 7.055e-15, 7.4119e-15, 7.4783e-15, 6.7645e-15, 5.5361e-15, 3.6271e-15, 2.7224e-15,
2.4568e-15, 2.1995e-15, 2.1331e-15, 2.3323e-15, 2.5564e-15, 3.0461e-15, 4.8555e-15, 5.5029e-15, 5.2788e-15,
4.565e-15, 3.3698e-15, 2.2991e-15, 2.0086e-15, 1.5521e-15, 1.328e-15, 1.162e-15, 9.379e-16, 8.715e-16, 8.134e-16,
8.134e-16, 9.379e-16, 1.3612e-15, 1.6185e-15, 1.9754e-15, 1.8758e-15, 1.6849e-15, 1.2284e-15, 9.047e-16, 8.134e-16,
8.715e-16, 9.711e-16, 1.0375e-15, 1.0043e-15, 9.047e-16, 8.134e-16, 6.806e-16, 5.478e-16, 3.901e-16, 2.241e-16,
1.577e-16, 9.96e-17, 3.32e-17, 1.66e-17, 8.3e-18])
# Note the non-uniform spacing of this range; this is required for properly capturing the Raman peak shape.
FREQ_OFFSET = array([
0., 0.5, 1., 1.5, 2., 2.5, 3., 3.5, 4., 4.5, 5., 5.5, 6., 6.5, 7., 7.5, 8., 8.5, 9., 9.5, 10., 10.5, 11., 11.5, 12.,
12.5, 12.75, 13., 13.25, 13.5, 14., 14.5, 14.75, 15., 15.5, 16., 16.5, 17., 17.5, 18., 18.25, 18.5, 18.75, 19.,
19.5, 20., 20.5, 21., 21.5, 22., 22.5, 23., 23.5, 24., 24.5, 25., 25.5, 26., 26.5, 27., 27.5, 28., 28.5, 29., 29.5,
30., 30.5, 31., 31.5, 32., 32.5, 33., 33.5, 34., 34.5, 35., 35.5, 36., 36.5, 37., 37.5, 38., 38.5, 39., 39.5, 40.,
40.5, 41., 41.5, 42.]) * 1e12
class FiberParams(Parameters):
def __init__(self, **kwargs):
try:
self._length = convert_length(kwargs['length'], kwargs['length_units'])
# fixed attenuator for padding
self._att_in = kwargs.get('att_in', 0)
# if not defined in the network json connector loss in/out
# the None value will be updated in network.py[build_network]
# with default values from eqpt_config.json[Spans]
self._con_in = kwargs.get('con_in')
self._con_out = kwargs.get('con_out')
if 'ref_wavelength' in kwargs:
self._ref_wavelength = kwargs['ref_wavelength']
self._ref_frequency = c / self._ref_wavelength
elif 'ref_frequency' in kwargs:
self._ref_frequency = kwargs['ref_frequency']
self._ref_wavelength = c / self._ref_frequency
else:
self._ref_wavelength = 1550e-9 # conventional central C band wavelength [m]
self._ref_frequency = c / self._ref_wavelength
self._dispersion = kwargs['dispersion'] # s/m/m
self._dispersion_slope = \
kwargs.get('dispersion_slope', -2 * self._dispersion / self.ref_wavelength) # s/m/m/m
self._beta2 = -(self.ref_wavelength ** 2) * self.dispersion / (2 * pi * c) # 1/(m * Hz^2)
# Eq. (3.23) in Abramczyk, Halina. "Dispersion phenomena in optical fibers." Virtual European University
# on Lasers. Available online: http://mitr.p.lodz.pl/evu/lectures/Abramczyk3.pdf
# (accessed on 25 March 2018) (2005).
self._beta3 = ((self.dispersion_slope - (4*pi*c/self.ref_wavelength**3) * self.beta2) /
(2*pi*c/self.ref_wavelength**2)**2)
self._effective_area = kwargs.get('effective_area') # m^2
n2 = 2.6e-20 # m^2/W
if self._effective_area:
self._gamma = kwargs.get('gamma', 2 * pi * n2 / (self.ref_wavelength * self._effective_area)) # 1/W/m
elif 'gamma' in kwargs:
self._gamma = kwargs['gamma'] # 1/W/m
self._effective_area = 2 * pi * n2 / (self.ref_wavelength * self._gamma) # m^2
else:
self._gamma = 0 # 1/W/m
self._effective_area = 83e-12 # m^2
default_raman_efficiency = {'cr': CR_NORM / self._effective_area, 'frequency_offset': FREQ_OFFSET}
self._raman_efficiency = kwargs.get('raman_efficiency', default_raman_efficiency)
self._pmd_coef = kwargs['pmd_coef'] # s/sqrt(m)
if type(kwargs['loss_coef']) == dict:
self._loss_coef = asarray(kwargs['loss_coef']['value']) * 1e-3 # lineic loss dB/m
self._f_loss_ref = asarray(kwargs['loss_coef']['frequency']) # Hz
else:
self._loss_coef = asarray(kwargs['loss_coef']) * 1e-3 # lineic loss dB/m
self._f_loss_ref = asarray(self._ref_frequency) # Hz
self._lumped_losses = kwargs['lumped_losses'] if 'lumped_losses' in kwargs else []
except KeyError as e:
raise ParametersError(f'Fiber configurations json must include {e}. Configuration: {kwargs}')
@property
def length(self):
return self._length
@length.setter
def length(self, length):
"""length must be in m"""
self._length = length
@property
def att_in(self):
return self._att_in
@att_in.setter
def att_in(self, att_in):
self._att_in = att_in
@property
def con_in(self):
return self._con_in
@con_in.setter
def con_in(self, con_in):
self._con_in = con_in
@property
def con_out(self):
return self._con_out
@property
def lumped_losses(self):
return self._lumped_losses
@con_out.setter
def con_out(self, con_out):
self._con_out = con_out
@property
def dispersion(self):
return self._dispersion
@property
def dispersion_slope(self):
return self._dispersion_slope
@property
def gamma(self):
return self._gamma
@property
def pmd_coef(self):
return self._pmd_coef
@property
def ref_wavelength(self):
return self._ref_wavelength
@property
def ref_frequency(self):
return self._ref_frequency
@property
def beta2(self):
return self._beta2
@property
def beta3(self):
return self._beta3
@property
def loss_coef(self):
return self._loss_coef
@property
def f_loss_ref(self):
return self._f_loss_ref
@property
def raman_efficiency(self):
return self._raman_efficiency
def asdict(self):
dictionary = super().asdict()
dictionary['loss_coef'] = self.loss_coef * 1e3
dictionary['length_units'] = 'm'
if not self.lumped_losses:
dictionary.pop('lumped_losses')
if not self.raman_efficiency:
dictionary.pop('raman_efficiency')
return dictionary
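A minimal construction example; per the `KeyError` handling above, the mandatory keys are `length`, `length_units`, `dispersion`, `pmd_coef` and `loss_coef`. The figures below are typical SSMF values used purely for illustration:

```python
fiber_params = FiberParams(
    length=80, length_units='km',   # stored internally in metres
    dispersion=1.67e-05,            # s/m/m
    effective_area=83e-12,          # m^2 (gamma is then derived)
    pmd_coef=1.265e-15,             # s/sqrt(m)
    loss_coef=0.2,                  # dB/km, converted internally to dB/m
)
print(fiber_params.length)          # 80000.0
```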
class EdfaParams:
def __init__(self, **params):
self.update_params(params)
if params == {}:
self.type_variety = ''
self.type_def = ''
# self.gain_flatmax = 0
# self.gain_min = 0
# self.p_max = 0
# self.nf_model = None
# self.nf_fit_coeff = None
# self.nf_ripple = None
# self.dgt = None
# self.gain_ripple = None
# self.out_voa_auto = False
# self.allowed_for_design = None
def update_params(self, kwargs):
for k, v in kwargs.items():
setattr(self, k, self.update_params(**v) if isinstance(v, dict) else v)
class EdfaOperational:
default_values = {
'gain_target': None,
'delta_p': None,
'out_voa': None,
'tilt_target': 0
}
def __init__(self, **operational):
self.update_attr(operational)
def update_attr(self, kwargs):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in self.default_values.items():
setattr(self, k, clean_kwargs.get(k, v))
def __repr__(self):
return (f'{type(self).__name__}('
f'gain_target={self.gain_target!r}, '
f'tilt_target={self.tilt_target!r})')
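The empty-string filtering in `update_attr` is easy to miss; a quick sketch of the resulting behaviour:

```python
op = EdfaOperational(gain_target=21.0, out_voa='')   # '' means "not provided"
print(op)              # EdfaOperational(gain_target=21.0, tilt_target=0)
print(op.out_voa)      # None - the empty string was dropped, the default applies
```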

File diff suppressed because it is too large

File diff suppressed because it is too large

gnpy/core/service_sheet.py (new file, 268 lines)

@@ -0,0 +1,268 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
gnpy.core.service_sheet
========================
XLS parser that can be called to create a JSON request file in accordance with
Yang model for requesting path computation.
See: draft-ietf-teas-yang-path-computation-01.txt
"""
from sys import exit
try:
from xlrd import open_workbook, XL_CELL_EMPTY
except ModuleNotFoundError:
exit('Required: `pip install xlrd`')
from collections import namedtuple
from logging import getLogger, basicConfig, CRITICAL, DEBUG, INFO
from json import dumps
from pathlib import Path
from gnpy.core.equipment import load_equipment
from gnpy.core.utils import db2lin, lin2db
from gnpy.core.exceptions import ServiceError
SERVICES_COLUMN = 12
#EQPT_LIBRARY_FILENAME = Path(__file__).parent / 'eqpt_config.json'
all_rows = lambda sheet, start=0: (sheet.row(x) for x in range(start, sheet.nrows))
logger = getLogger(__name__)
# Type for input data
class Request(namedtuple('Request', 'request_id source destination trx_type mode \
spacing power nb_channel disjoint_from nodes_list is_loose path_bandwidth')):
def __new__(cls, request_id, source, destination, trx_type, mode=None, spacing=None, power=None, nb_channel=None, disjoint_from='', nodes_list=None, is_loose='', path_bandwidth=None):
return super().__new__(cls, request_id, source, destination, trx_type, mode, spacing, power, nb_channel, disjoint_from, nodes_list, is_loose, path_bandwidth)
# Type for output data: // from dutc
class Element:
def __eq__(self, other):
return type(self) == type(other) and self.uid == other.uid
def __hash__(self):
return hash((type(self), self.uid))
class Request_element(Element):
def __init__(self, Request, eqpt_filename, bidir):
# request_id is str
# excel has automatic number formatting that adds .0 on integer values
# the next lines recover the pure int value, assuming this .0 is unwanted
self.request_id = correct_xlrd_int_to_str_reading(Request.request_id)
self.source = f'trx {Request.source}'
self.destination = f'trx {Request.destination}'
# TODO: the automatic naming generated by excel parser requires that source and dest name
# be a string starting with 'trx' : this is manually added here.
self.srctpid = f'trx {Request.source}'
self.dsttpid = f'trx {Request.destination}'
self.bidir = bidir
# test that trx_type belongs to eqpt_config.json
# if not replace it with a default
equipment = load_equipment(eqpt_filename)
try :
if equipment['Transceiver'][Request.trx_type]:
self.trx_type = correct_xlrd_int_to_str_reading(Request.trx_type)
if Request.mode is not None :
Requestmode = correct_xlrd_int_to_str_reading(Request.mode)
if [mode for mode in equipment['Transceiver'][Request.trx_type].mode if mode['format'] == Requestmode]:
self.mode = Requestmode
else :
msg = f'Request Id: {self.request_id} - could not find tsp : \'{Request.trx_type}\' with mode: \'{Requestmode}\' in eqpt library \nComputation stopped.'
#print(msg)
logger.critical(msg)
exit(1)
else:
Requestmode = None
self.mode = Request.mode
except KeyError:
msg = f'Request Id: {self.request_id} - could not find tsp : \'{Request.trx_type}\' with mode: \'{Request.mode}\' in eqpt library \nComputation stopped.'
#print(msg)
logger.critical(msg)
raise ServiceError(msg)
# excel input are in GHz and dBm
if Request.spacing is not None:
self.spacing = Request.spacing * 1e9
else:
msg = f'Request {self.request_id} missing spacing: spacing is mandatory.\ncomputation stopped'
logger.critical(msg)
raise ServiceError(msg)
if Request.power is not None:
self.power = db2lin(Request.power) * 1e-3
else:
self.power = None
if Request.nb_channel is not None :
self.nb_channel = int(Request.nb_channel)
else:
self.nb_channel = None
value = correct_xlrd_int_to_str_reading(Request.disjoint_from)
self.disjoint_from = [n for n in value.split(' | ') if value]
self.nodes_list = []
if Request.nodes_list :
self.nodes_list = Request.nodes_list.split(' | ')
# cleaning the list of nodes to remove source and destination
# (because the remaining of the program assumes that the nodes list are nodes
# on the path and should not include source and destination)
try :
self.nodes_list.remove(self.source)
msg = f'{self.source} removed from explicit path node-list'
logger.info(msg)
except ValueError:
msg = f'{self.source} already removed from explicit path node-list'
logger.info(msg)
try :
self.nodes_list.remove(self.destination)
msg = f'{self.destination} removed from explicit path node-list'
logger.info(msg)
except ValueError:
msg = f'{self.destination} already removed from explicit path node-list'
logger.info(msg)
# the excel parser applies the same hop-type to all nodes in the route nodes_list.
# user can change this per node in the generated json
self.loose = 'LOOSE'
if Request.is_loose == 'no' :
self.loose = 'STRICT'
self.path_bandwidth = None
if Request.path_bandwidth is not None:
self.path_bandwidth = Request.path_bandwidth * 1e9
else:
self.path_bandwidth = 0
uid = property(lambda self: repr(self))
@property
def pathrequest(self):
# Default assumption for bidir is False
req_dictionnary = {
'request-id':self.request_id,
'source': self.source,
'destination': self.destination,
'src-tp-id': self.srctpid,
'dst-tp-id': self.dsttpid,
'bidirectional': self.bidir,
'path-constraints':{
'te-bandwidth': {
'technology': 'flexi-grid',
'trx_type' : self.trx_type,
'trx_mode' : self.mode,
'effective-freq-slot':[{'N': 'null', 'M': 'null'}],
'spacing' : self.spacing,
'max-nb-of-channel' : self.nb_channel,
'output-power' : self.power
}
}
}
if self.nodes_list:
req_dictionnary['explicit-route-objects'] = {}
temp = {'route-object-include-exclude' : [
{'explicit-route-usage': 'route-include-ero',
'index': self.nodes_list.index(node),
'num-unnum-hop': {
'node-id': f'{node}',
'link-tp-id': 'link-tp-id is not used',
'hop-type': f'{self.loose}',
}
}
for node in self.nodes_list]
}
req_dictionnary['explicit-route-objects'] = temp
if self.path_bandwidth is not None:
req_dictionnary['path-constraints']['te-bandwidth']['path_bandwidth'] = self.path_bandwidth
return req_dictionnary
@property
def pathsync(self):
if self.disjoint_from :
return {'synchronization-id':self.request_id,
'svec': {
'relaxable' : 'false',
'disjointness': 'node link',
'request-id-number': [self.request_id]+ [n for n in self.disjoint_from]
}
}
else:
return None
# TO-DO: avoid multiple entries with same synchronisation vectors
@property
def json(self):
return self.pathrequest , self.pathsync
def convert_service_sheet(input_filename, eqpt_filename, output_filename='', bidir=False, filter_region=None):
""" converts a service sheet into a json structure
"""
if filter_region is None:
filter_region = []
service = parse_excel(input_filename)
req = [Request_element(n, eqpt_filename, bidir) for n in service]
# dumps the output into a json file with name
# split_filename = [input_filename[0:len(input_filename)-len(suffix_filename)] , suffix_filename[1:]]
if output_filename=='':
output_filename = f'{str(input_filename)[0:len(str(input_filename))-len(str(input_filename.suffixes[0]))]}_services.json'
# for debug
# print(json_filename)
# if there is no sync vector , do not write any synchronization
synchro = [n.json[1] for n in req if n.json[1] is not None]
if synchro:
data = {
'path-request': [n.json[0] for n in req],
'synchronization': synchro
}
else:
data = {
'path-request': [n.json[0] for n in req]
}
with open(output_filename, 'w', encoding='utf-8') as f:
f.write(dumps(data, indent=2, ensure_ascii=False))
return data
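A usage sketch; both file names below are placeholders. When `output_filename` is left empty, the JSON is written next to the input as `<input>_services.json`:

```python
from pathlib import Path

data = convert_service_sheet(Path('services.xls'), 'eqpt_config.json', bidir=False)
print(len(data['path-request']), 'requests converted')
```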
def correct_xlrd_int_to_str_reading(v) :
if not isinstance(v,str):
value = str(int(v))
if value.endswith('.0'):
value = value[:-2]
else:
value = v
return value
# to be used from dutc
def parse_row(row, fieldnames):
return {f: r.value for f, r in zip(fieldnames, row[0:SERVICES_COLUMN])
if r.ctype != XL_CELL_EMPTY}
#
def parse_excel(input_filename):
with open_workbook(input_filename) as wb:
service_sheet = wb.sheet_by_name('Service')
services = list(parse_service_sheet(service_sheet))
return services
def parse_service_sheet(service_sheet):
""" reads each column according to authorized fieldnames. order is not important.
"""
logger.info(f'Validating headers on {service_sheet.name!r}')
# add a test on field to enable the '' field case that arises when columns on the
# right hand side are used as comments or drawing in the excel sheet
header = [x.value.strip() for x in service_sheet.row(4)[0:SERVICES_COLUMN]
if len(x.value.strip()) > 0]
# create a service_fieldname independent from the excel column order
# to be compatible with any version of the sheet
# the following dictionary records the excel field names and the corresponding parameter's name
authorized_fieldnames = {
'route id':'request_id', 'Source':'source', 'Destination':'destination', \
'TRX type':'trx_type', 'Mode' : 'mode', 'System: spacing':'spacing', \
'System: input power (dBm)':'power', 'System: nb of channels':'nb_channel',\
'routing: disjoint from': 'disjoint_from', 'routing: path':'nodes_list',\
'routing: is loose?':'is_loose', 'path bandwidth':'path_bandwidth'}
try:
service_fieldnames = [authorized_fieldnames[e] for e in header]
except KeyError:
msg = f'Malformed header on Service sheet: {header} field not in {authorized_fieldnames}'
logger.critical(msg)
raise ValueError(msg)
for row in all_rows(service_sheet, start=5):
yield Request(**parse_row(row[0:SERVICES_COLUMN], service_fieldnames))


@@ -2,12 +2,12 @@
# -*- coding: utf-8 -*-
"""
gnpy.topology.spectrum_assignment
=================================
gnpy.core.spectrum_assignment
=============================
This module contains the :class:`Oms` and :class:`Bitmap` classes and methods to
select and assign spectrum. The :func:`spectrum_selection` function identifies the free
slots and :func:`select_candidate` selects the candidate spectrum according to
This module contains the Oms and Bitmap classes and the different method to
select and assign spectrum. Spectrum_selection function identifies the free
slots and select_candidate selects the candidate spectrum according to
strategy: for example first fit
oms records its elements, and elements are updated with an oms to have
element/oms correspondence
@@ -15,30 +15,28 @@ element/oms correspondace
from collections import namedtuple
from logging import getLogger
from math import ceil
from gnpy.core.elements import Roadm, Transceiver
from gnpy.core.exceptions import ServiceError, SpectrumError
from gnpy.topology.request import compute_spectrum_slot_vs_bandwidth
from gnpy.core.exceptions import SpectrumError
LOGGER = getLogger(__name__)
class Bitmap:
""" records the spectrum occupation
"""
def __init__(self, f_min, f_max, grid, guardband=0.15e12, bitmap=None):
# n is the min index including guardband. Guardband is required to make sure
# that a channel can be assigned with center frequency fmin (meaning that its
# slot occupation goes below freq_index_min)
n_min = frequency_to_n(f_min - guardband, grid)
n_max = frequency_to_n(f_max + guardband, grid) - 1
n_min = frequency_to_n(f_min-guardband, grid)
n_max = frequency_to_n(f_max+guardband, grid) - 1
self.n_min = n_min
self.n_max = n_max
self.freq_index_min = frequency_to_n(f_min)
self.freq_index_max = frequency_to_n(f_max)
self.freq_index = list(range(n_min, n_max + 1))
self.freq_index = list(range(n_min, n_max+1))
if bitmap is None:
self.bitmap = [1] * (n_max - n_min + 1)
self.bitmap = [1] * (n_max-n_min+1)
elif len(bitmap) == len(self.freq_index):
self.bitmap = bitmap
else:
@@ -48,37 +46,31 @@ class Bitmap:
""" converts the n (itu grid) into a local index
"""
return self.freq_index[i]
def geti(self, nvalue):
""" converts the local index into n (itu grid)
"""
return self.freq_index.index(nvalue)
def insert_left(self, newbitmap):
""" insert bitmap on the left to align oms bitmaps if their start frequencies are different
"""
self.bitmap = newbitmap + self.bitmap
temp = list(range(self.n_min - len(newbitmap), self.n_min))
temp = list(range(self.n_min-len(newbitmap), self.n_min))
self.freq_index = temp + self.freq_index
self.n_min = self.freq_index[0]
def insert_right(self, newbitmap):
""" insert bitmap on the right to align oms bitmaps if their stop frequencies are different
"""
self.bitmap = self.bitmap + newbitmap
self.freq_index = self.freq_index + list(range(self.n_max, self.n_max + len(newbitmap)))
self.freq_index = self.freq_index + list(range(self.n_max, self.n_max+len(newbitmap)))
self.n_max = self.freq_index[-1]
# +'grid available_slots f_min f_max services_list')
OMSParams = namedtuple('OMSParams', 'oms_id el_id_list el_list')
class OMS:
""" OMS class is the logical container that represent a link between two adjacent ROADMs and
records the crossed elements and the occupied spectrum
"""
def __init__(self, *args, **params):
params = OMSParams(**params)
self.oms_id = params.oms_id
@@ -88,14 +80,12 @@ class OMS:
self.nb_channels = 0
self.service_list = []
# TODO
def __str__(self):
return '\n\t'.join([f'{type(self).__name__} {self.oms_id}',
f'{self.el_id_list[0]} - {self.el_id_list[-1]}'])
f'{self.el_id_list[0]} - {self.el_id_list[-1]}'])
def __repr__(self):
return '\n\t'.join([f'{type(self).__name__} {self.oms_id}',
f'{self.el_id_list[0]} - {self.el_id_list[-1]}', '\n'])
f'{self.el_id_list[0]} - {self.el_id_list[-1]}', '\n'])
def add_element(self, elem):
""" records oms elements
@@ -128,22 +118,25 @@ class OMS:
def assign_spectrum(self, nvalue, mvalue):
""" change oms spectrum to mark spectrum assigned
"""
if not isinstance(nvalue, int):
raise SpectrumError(f'N must be a signed integer, got {nvalue}')
if not isinstance(mvalue, int):
raise SpectrumError(f'M must be an integer, got {mvalue}')
if mvalue <= 0:
raise SpectrumError(f'M must be positive, got {mvalue}')
if nvalue > self.spectrum_bitmap.freq_index_max:
raise SpectrumError(f'N {nvalue} over the upper spectrum boundary')
if nvalue < self.spectrum_bitmap.freq_index_min:
raise SpectrumError(f'N {nvalue} below the lower spectrum boundary')
if (nvalue is None or mvalue is None or isinstance(nvalue, float)
or isinstance(mvalue, float) or mvalue == 0):
raise SpectrumError('could not assign None values')
startn, stopn = mvalue_to_slots(nvalue, mvalue)
if stopn > self.spectrum_bitmap.n_max:
raise SpectrumError(f'N {nvalue}, M {mvalue} over the N spectrum bitmap bounds')
if startn <= self.spectrum_bitmap.n_min:
raise SpectrumError(f'N {nvalue}, M {mvalue} below the N spectrum bitmap bounds')
self.spectrum_bitmap.bitmap[self.spectrum_bitmap.geti(startn):self.spectrum_bitmap.geti(stopn) + 1] = [0] * (stopn - startn + 1)
# print(f'startn stop n {startn} , {stopn}')
# assumes that guardbands are sufficient to ensure that assigning a center channel
# at fmin or fmax is OK if startn > self.spectrum_bitmap.n_min
if (nvalue <= self.spectrum_bitmap.freq_index_max and
nvalue >= self.spectrum_bitmap.freq_index_min and
stopn <= self.spectrum_bitmap.n_max and
startn > self.spectrum_bitmap.n_min):
# verification that both length are identical
self.spectrum_bitmap.bitmap[self.spectrum_bitmap.geti(startn):self.spectrum_bitmap.geti(stopn)+1] = [0] * (stopn-startn+1)
return True
else:
msg = f'Could not assign n {nvalue}, m {mvalue} values:' +\
f' one or several slots are not available'
LOGGER.info(msg)
return False
def add_service(self, service_id, nb_wl):
""" record service and mark spectrum as occupied
@@ -151,79 +144,38 @@ class OMS:
self.service_list.append(service_id)
self.nb_channels += nb_wl
def frequency_to_n(freq, grid=0.00625e12):
""" converts frequency into the n value (ITU grid)
reference to Recommendation G.694.1 (02/12), Figure I.3
https://www.itu.int/rec/T-REC-G.694.1-201202-I/en
>>> frequency_to_n(193.1375e12)
6
>>> frequency_to_n(193.225e12)
20
"""
return (int)((freq - 193.1e12) / grid)
return (int)((freq-193.1e12)/grid)
def nvalue_to_frequency(nvalue, grid=0.00625e12):
""" converts n value into a frequency
reference to Recommendation G.694.1 (02/12), Table 1
https://www.itu.int/rec/T-REC-G.694.1-201202-I/en
>>> nvalue_to_frequency(6)
193137500000000.0
>>> nvalue_to_frequency(-1, 0.1e12)
193000000000000.0
"""
return 193.1e12 + nvalue * grid
def mvalue_to_slots(nvalue, mvalue):
""" convert center n an m into start and stop n
"""
startn = nvalue - mvalue
stopn = nvalue + mvalue - 1
stopn = nvalue + mvalue -1
return startn, stopn
def slots_to_m(startn, stopn):
""" converts the start and stop n values to the center n and m value
reference to Recommendation G.694.1 (02/12), Figure I.3
https://www.itu.int/rec/T-REC-G.694.1-201202-I/en
>>> nval, mval = slots_to_m(6, 20)
>>> nval
13
>>> mval
7
"""
nvalue = (int)((startn + stopn + 1) / 2)
mvalue = (int)((stopn - startn + 1) / 2)
nvalue = (int)((startn+stopn+1)/2)
mvalue = (int)((stopn-startn+1)/2)
return nvalue, mvalue
def m_to_freq(nvalue, mvalue, grid=0.00625e12):
""" converts m into frequency range
spectrum(13,7) is (193137500000000.0, 193225000000000.0)
reference to Recommendation G.694.1 (02/12), Figure I.3
https://www.itu.int/rec/T-REC-G.694.1-201202-I/en
>>> fstart, fstop = m_to_freq(13, 7)
>>> fstart
193137500000000.0
>>> fstop
193225000000000.0
"""
startn, stopn = mvalue_to_slots(nvalue, mvalue)
fstart = nvalue_to_frequency(startn, grid)
fstop = nvalue_to_frequency(stopn + 1, grid)
fstop = nvalue_to_frequency(stopn+1, grid)
return fstart, fstop
def align_grids(oms_list):
""" used to apply same grid to all oms : same starting n, stop n and slot size
out of grid slots are set to 0
@@ -237,7 +189,6 @@ def align_grids(oms_list):
this_o.spectrum_bitmap.insert_right([0] * (n_max - this_o.spectrum_bitmap.n_max))
return oms_list
def build_oms_list(network, equipment):
""" initialization of OMS list in the network
an OMS is built by reading all intermediate nodes between two adjacent ROADMs
@@ -251,7 +202,7 @@ def build_oms_list(network, equipment):
for node in [n for n in network.nodes() if isinstance(n, Roadm)]:
for edge in network.edges([node]):
if not isinstance(edge[1], Transceiver):
nd_in = edge[0] # nd_in is a Roadm
nd_in = edge[0] # nd_in is a Roadm
try:
nd_in.oms_list.append(oms_id)
except AttributeError:
@@ -294,7 +245,6 @@ def build_oms_list(network, equipment):
reversed_oms(oms_list)
return oms_list
def reversed_oms(oms_list):
""" identifies reversed OMS
only applicable for non-parallel OMS
@@ -312,7 +262,8 @@ def reversed_oms(oms_list):
def bitmap_sum(band1, band2):
"""mark occupied bitmap by 0 if the slot is occupied in band1 or in band2"""
""" a functions that marks occupied bitmap by 0 if the slot is occupied in band1 or in band2
"""
res = []
for i, elem in enumerate(band1):
if band2[i] * elem == 0:
@@ -321,9 +272,15 @@ def bitmap_sum(band1, band2):
res.append(1)
return res
def spectrum_selection(pth, oms_list, requested_m, requested_n=None):
"""Collects spectrum availability and call the select_candidate function"""
""" collects spectrum availability and call the select_candidate function
# step 1 collects pth spectrum availability
# step 2 if n is not None try to assign the spectrum
# if the spectrum is not available then sends back an "error"
# if n is None selects candidate spectrum
# select spectrum that fits the policy ( first fit, random, ABP...)
# step3 returns the selection
"""
# use indexes instead of ITU-T n values
path_oms = []
@@ -346,11 +303,11 @@ def spectrum_selection(pth, oms_list, requested_m, requested_n=None):
freq_availability = bitmap_sum(oms_list[oms].spectrum_bitmap.bitmap, freq_availability)
if requested_n is None:
# avoid slots reserved on the edge 0.15e-12 on both sides -> 24
candidates = [(freq_index[i] + requested_m, freq_index[i], freq_index[i] + 2 * requested_m - 1)
candidates = [(freq_index[i]+requested_m, freq_index[i], freq_index[i]+2*requested_m-1)
for i in range(len(freq_availability))
if freq_availability[i:i + 2 * requested_m] == [1] * (2 * requested_m)
if freq_availability[i:i+2*requested_m] == [1] * (2*requested_m)
and freq_index[i] >= freq_index_min
and freq_index[i + 2 * requested_m - 1] <= freq_index_max]
and freq_index[i+2*requested_m-1] <= freq_index_max]
candidate = select_candidate(candidates, policy='first_fit')
else:
@@ -358,11 +315,11 @@ def spectrum_selection(pth, oms_list, requested_m, requested_n=None):
# print(f'N {requested_n} i {i}')
# print(freq_availability[i-m:i+m] )
# print(freq_index[i-m:i+m])
if (freq_availability[i - requested_m:i + requested_m] == [1] * (2 * requested_m) and
freq_index[i - requested_m] >= freq_index_min
and freq_index[i + requested_m - 1] <= freq_index_max):
if (freq_availability[i-requested_m:i+requested_m] == [1] * (2*requested_m) and
freq_index[i-requested_m] >= freq_index_min
and freq_index[i+requested_m-1] <= freq_index_max):
# candidate is the triplet center_n, startn and stopn
candidate = (requested_n, requested_n - requested_m, requested_n + requested_m - 1)
candidate = (requested_n, requested_n-requested_m, requested_n+requested_m-1)
else:
candidate = (None, None, None)
# print("coucou11")
@@ -373,7 +330,6 @@ def spectrum_selection(pth, oms_list, requested_m, requested_n=None):
# print(candidate)
return candidate, path_oms
def select_candidate(candidates, policy):
""" selects a candidate among all available spectrum
"""
@@ -385,45 +341,46 @@ def select_candidate(candidates, policy):
else:
raise ServiceError('Only first_fit spectrum assignment policy is implemented.')
def pth_assign_spectrum(pths, rqs, oms_list, rpths):
""" basic first fit assignment
if reversed paths are provided, it means that the spectrum occupation is bidirectional
"""
for pth, rq, rpth in zip(pths, rqs, rpths):
for i, pth in enumerate(pths):
# computes the number of channels required
if hasattr(rq, 'blocking_reason'):
rq.N = None
rq.M = None
else:
nb_wl, requested_m = compute_spectrum_slot_vs_bandwidth(rq.path_bandwidth,
rq.spacing, rq.bit_rate)
if getattr(rq, 'M', None) is not None:
# Consistency check between the requested M and path_bandwidth
# M value should be bigger than the computed requested_m (simple estimate)
# TODO: elaborate a more accurate estimate with nb_wl * tx_osnr + possibly guardbands in case of
# superchannel closed packing.
if requested_m > rq.M:
rq.N = None
rq.M = None
rq.blocking_reason = 'NOT_ENOUGH_RESERVED_SPECTRUM'
# need to stop here for this request and not go through the spectrum selection process with requested_m
continue
# use the req.M even if requested_m is smaller
requested_m = rq.M
requested_n = getattr(rq, 'N', None)
(center_n, startn, stopn), path_oms = spectrum_selection(pth + rpth, oms_list, requested_m,
requested_n)
# if requested n and m concern already occupied spectrum the previous function returns a None candidate
try:
if rqs[i].blocking_reason:
rqs[i].blocked = True
rqs[i].N = 0
rqs[i].M = 0
except AttributeError:
nb_wl = ceil(rqs[i].path_bandwidth / rqs[i].bit_rate)
# computes the total nb of slots according to requested spacing
# TODO : express superchannels
# assumes that all channels must be grouped
# TODO : enables non contiguous reservation in case of blocking
requested_m = ceil(rqs[i].spacing / 0.0125e12) * nb_wl
# concatenate all path and reversed path elements to derive slots availability
(center_n, startn, stopn), path_oms = spectrum_selection(pth + rpths[i], oms_list, requested_m,
requested_n=None)
# checks that requested_m fits within startn and stopn
# if not None, center_n and start, stop frequencies are applicable to all oms of pth
# checks that spectrum is not None else indicate blocking reason
if center_n is not None:
# checks that requested_m fits within startn and stopn
if 2 * requested_m > (stopn - startn + 1):
msg = f'candidate: {(center_n, startn, stopn)} is not consistent ' +\
f'with {requested_m}'
LOGGER.critical(msg)
raise ValueError(msg)
for oms_elem in path_oms:
oms_list[oms_elem].assign_spectrum(center_n, requested_m)
oms_list[oms_elem].add_service(rq.request_id, nb_wl)
rq.N = center_n
rq.M = requested_m
oms_list[oms_elem].add_service(rqs[i].request_id, nb_wl)
rqs[i].blocked = False
rqs[i].N = center_n
rqs[i].M = requested_m
else:
rq.N = None
rq.M = None
rq.blocking_reason = 'NO_SPECTRUM'
rqs[i].blocked = True
rqs[i].N = 0
rqs[i].M = 0
rqs[i].blocking_reason = 'NO_SPECTRUM'

gnpy/core/units.py (new file, 5 lines)

@@ -0,0 +1,5 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
UNITS = {'m': 1,
'km': 1E3}
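These factors are what `build_network` uses to normalise span lengths; for example (illustrative values only):

```python
max_length_m = 135 * UNITS['km']   # 135 km expressed in metres -> 135000.0
padding_m = 50 * UNITS['m']        # already in metres -> 50
```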


@@ -1,26 +1,38 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
'''
gnpy.core.utils
===============
This module contains utility functions that are used with gnpy.
"""
'''
import json
from csv import writer
from numpy import pi, cos, sqrt, log10, linspace, zeros, shape, where, logical_and
import numpy as np
from numpy import pi, cos, sqrt, log10
from scipy import constants
from gnpy.core.exceptions import ConfigurationError
def load_json(filename):
with open(filename, 'r', encoding='utf-8') as f:
data = json.load(f)
return data
def save_json(obj, filename):
with open(filename, 'w', encoding='utf-8') as f:
json.dump(obj, f, indent=2, ensure_ascii=False)
def write_csv(obj, filename):
"""
Convert dictionary items to a CSV file. The dictionary format:
::
convert dictionary items to a csv file
the dictionary format :
{'result category 1':
{'result category 1':
[
# 1st line of results
{'header 1' : value_xxx,
@@ -29,113 +41,67 @@ def write_csv(obj, filename):
{'header 1' : value_www,
'header 2' : value_zzz}
],
'result_category 2':
'result_category 2':
[
{},{}
]
}
}
The generated csv file will be:
::
result_category 1
header 1 header 2
value_xxx value_yyy
value_www value_zzz
result_category 2
...
the generated csv file will be:
result_category 1
header 1 header 2
value_xxx value_yyy
value_www value_zzz
result_category 2
...
"""
with open(filename, 'w', encoding='utf-8') as f:
w = writer(f)
for data_key, data_list in obj.items():
# main header
#main header
w.writerow([data_key])
# sub headers:
#sub headers:
headers = [_ for _ in data_list[0].keys()]
w.writerow(headers)
for data_dict in data_list:
w.writerow([_ for _ in data_dict.values()])
def c():
"""
Returns the speed of light in meters per second
"""
return constants.c
def arrange_frequencies(length, start, stop):
"""Create an array of frequencies
:param length: number of elements
:param start: Start frequency in THz
:param star: Start frequency in THz
:param stop: Stop frequency in THz
:type length: integer
:type start: float
:type stop: float
:return: an array of frequencies determined by the spacing parameter
:return an array of frequencies determined by the spacing parameter
:rtype: numpy.ndarray
"""
return linspace(start, stop, length)
return np.linspace(start, stop, length)
def h():
"""
Returns Planck's constant in J*s
"""
return constants.h
def lin2db(value):
"""Convert linear unit to logarithmic (dB)
>>> lin2db(0.001)
-30.0
>>> round(lin2db(1.0), 2)
0.0
>>> round(lin2db(1.26), 2)
1.0
>>> round(lin2db(10.0), 2)
10.0
>>> round(lin2db(100.0), 2)
20.0
"""
return 10 * log10(value)
def db2lin(value):
"""Convert logarithimic units to linear
>>> round(db2lin(10.0), 2)
10.0
>>> round(db2lin(20.0), 2)
100.0
>>> round(db2lin(1.0), 2)
1.26
>>> round(db2lin(0.0), 2)
1.0
>>> round(db2lin(-10.0), 2)
0.1
"""
return 10**(value / 10)
def round2float(number, step):
"""Round a floating point number so that its "resolution" is not bigger than 'step'
The finest step is fixed at 0.01; smaller values are silently changed to 0.01.
>>> round2float(123.456, 1000)
0.0
>>> round2float(123.456, 100)
100.0
>>> round2float(123.456, 10)
120.0
>>> round2float(123.456, 1)
123.0
>>> round2float(123.456, 0.1)
123.5
>>> round2float(123.456, 0.01)
123.46
>>> round2float(123.456, 0.001)
123.46
>>> round2float(123.249, 0.5)
123.0
>>> round2float(123.250, 0.5)
123.0
>>> round2float(123.251, 0.5)
123.5
>>> round2float(123.300, 0.2)
123.2
>>> round2float(123.301, 0.2)
123.4
"""
step = round(step, 1)
if step >= 0.01:
number = round(number / step, 0)
@@ -144,28 +110,19 @@ def round2float(number, step):
number = round(number, 2)
return number
wavelength2freq = constants.lambda2nu
freq2wavelength = constants.nu2lambda
def freq2wavelength(value):
""" Converts frequency units to wavelength units.
>>> round(freq2wavelength(191.35e12) * 1e9, 3)
1566.723
>>> round(freq2wavelength(196.1e12) * 1e9, 3)
1528.773
"""
return constants.c / value
return c() / value
def snr_sum(snr, bw, snr_added, bw_added=12.5e9):
snr_added = snr_added - lin2db(bw / bw_added)
snr = -lin2db(db2lin(-snr) + db2lin(-snr_added))
snr_added = snr_added - lin2db(bw/bw_added)
snr = -lin2db(db2lin(-snr)+db2lin(-snr_added))
return snr
def deltawl2deltaf(delta_wl, wavelength):
""" deltawl2deltaf(delta_wl, wavelength):
delta_wl is BW in wavelength units
@@ -218,16 +175,15 @@ def rrc(ffs, baud_rate, alpha):
Ts = 1 / baud_rate
l_lim = (1 - alpha) / (2 * Ts)
r_lim = (1 + alpha) / (2 * Ts)
hf = zeros(shape(ffs))
slope_inds = where(
logical_and(abs(ffs) > l_lim, abs(ffs) < r_lim))
hf = np.zeros(np.shape(ffs))
slope_inds = np.where(
np.logical_and(np.abs(ffs) > l_lim, np.abs(ffs) < r_lim))
hf[slope_inds] = 0.5 * (1 + cos((pi * Ts / alpha) *
(abs(ffs[slope_inds]) - l_lim)))
p_inds = where(logical_and(abs(ffs) > 0, abs(ffs) < l_lim))
(np.abs(ffs[slope_inds]) - l_lim)))
p_inds = np.where(np.logical_and(np.abs(ffs) > 0, np.abs(ffs) < l_lim))
hf[p_inds] = 1
return sqrt(hf)
def merge_amplifier_restrictions(dict1, dict2):
"""Updates contents of dicts recursively
@@ -250,7 +206,6 @@ def merge_amplifier_restrictions(dict1, dict2):
copy_dict1[key] = dict2[key]
return copy_dict1
def silent_remove(this_list, elem):
"""Remove matching elements from a list without raising ValueError
@@ -268,59 +223,3 @@ def silent_remove(this_list, elem):
except ValueError:
pass
return this_list
def automatic_nch(f_min, f_max, spacing):
"""How many channels are available in the spectrum
:param f_min Lowest frequency [Hz]
:param f_max Highest frequency [Hz]
:param spacing Channel width [Hz]
:return Number of uniform channels
>>> automatic_nch(191.325e12, 196.125e12, 50e9)
96
>>> automatic_nch(193.475e12, 193.525e12, 50e9)
1
"""
return int((f_max - f_min) // spacing)
def automatic_fmax(f_min, spacing, nch):
"""Find the high-frequenecy boundary of a spectrum
:param f_min Start of the spectrum (lowest frequency edge) [Hz]
:param spacing Grid/channel spacing [Hz]
:param nch Number of channels
:return End of the spectrum (highest frequency) [Hz]
>>> automatic_fmax(191.325e12, 50e9, 96)
196125000000000.0
"""
return f_min + spacing * nch
def convert_length(value, units):
"""Convert length into basic SI units
>>> convert_length(1, 'km')
1000.0
>>> convert_length(2.0, 'km')
2000.0
>>> convert_length(123, 'm')
123.0
>>> convert_length(123.0, 'm')
123.0
>>> convert_length(42.1, 'km')
42100.0
>>> convert_length(666, 'yards')
Traceback (most recent call last):
...
gnpy.core.exceptions.ConfigurationError: Cannot convert length in "yards" into meters
"""
if units == 'm':
return value * 1e0
elif units == 'km':
return value * 1e3
else:
raise ConfigurationError(f'Cannot convert length in "{units}" into meters')

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,349 +0,0 @@
{
"Edfa": [
{
"type_variety": "openroadm_ila_low_noise",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-8.104e-4, -6.221e-2, -5.889e-1, 37.62],
"pmd": 3e-12,
"pdl": 0.7,
"allowed_for_design": true
},
{
"type_variety": "openroadm_ila_standard",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4, -6.250e-2, -1.071, 28.99],
"pmd": 3e-12,
"pdl": 0.7,
"allowed_for_design": true
},
{
"type_variety": "openroadm_mw_mw_preamp",
"type_def": "openroadm_preamp",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"pmd": 0,
"pdl": 0,
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_booster",
"type_def": "openroadm_booster",
"gain_flatmax": 32,
"gain_min": 0,
"p_max": 22,
"pmd": 0,
"pdl": 0,
"allowed_for_design": false
}
],
"Fiber": [
{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
},
{
"type_variety": "NZDF",
"dispersion": 0.5e-05,
"effective_area": 72e-12,
"pmd_coef": 1.265e-15
},
{
"type_variety": "LOF",
"dispersion": 2.2e-05,
"effective_area": 125e-12,
"pmd_coef": 1.265e-15
}
],
"RamanFiber": [
{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
}
],
"Span": [
{
"power_mode": true,
"delta_power_range_db": [0, 0, 0],
"max_fiber_lineic_loss_for_raman": 0.25,
"target_extended_gain": 0,
"max_length": 135,
"length_units": "km",
"max_loss": 28,
"padding": 11,
"EOL": 0,
"con_in": 0,
"con_out": 0
}
],
"Roadm": [
{
"target_pch_out_db": -20,
"add_drop_osnr": 30,
"pmd": 3e-12,
"pdl": 1.5,
"restrictions": {
"preamp_variety_list": ["openroadm_mw_mw_preamp"],
"booster_variety_list": ["openroadm_mw_mw_booster"]
}
}
],
"SI": [
{
"f_min": 191.3e12,
"baud_rate": 31.57e9,
"f_max": 196.1e12,
"spacing": 50e9,
"power_dbm": 2,
"power_range_db": [0, 0, 1],
"roll_off": 0.15,
"tx_osnr": 35,
"sys_margins": 2
}
],
"Transceiver": [
{
"type_variety": "OpenROADM MSA ver. 4.0",
"frequency": {
"min": 191.35e12,
"max": 196.1e12
},
"mode": [
{
"format": "100 Gbit/s, 27.95 Gbaud, DP-QPSK",
"baud_rate": 27.95e9,
"OSNR": 17,
"bit_rate": 100e9,
"roll_off": null,
"tx_osnr": 33,
"penalties": [
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 18e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 30,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
},
{
"pdl": 6,
"penalty_value": 4
}
],
"min_spacing": 50e9,
"cost": 1
},
{
"format": "100 Gbit/s, 31.57 Gbaud, DP-QPSK",
"baud_rate": 31.57e9,
"OSNR": 12,
"bit_rate": 100e9,
"roll_off": 0.15,
"tx_osnr": 35,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 40e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 30,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
},
{
"pdl": 6,
"penalty_value": 4
}
],
"min_spacing": 50e9,
"cost": 1
},
{
"format": "200 Gbit/s, DP-QPSK",
"baud_rate": 63.1e9,
"OSNR": 17,
"bit_rate": 200e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 24e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 25,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
},
{
"format": "300 Gbit/s, DP-8QAM",
"baud_rate": 63.1e9,
"OSNR": 21,
"bit_rate": 300e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 18e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 25,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
},
{
"format": "400 Gbit/s, DP-16QAM",
"baud_rate": 63.1e9,
"OSNR": 24,
"bit_rate": 400e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 12e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 20,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
}
]
}
]
}


@@ -1,409 +0,0 @@
{
"Edfa": [
{
"type_variety": "openroadm_ila_low_noise",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-8.104e-4, -6.221e-2, -5.889e-1, 37.62],
"pmd": 3e-12,
"pdl": 0.7,
"allowed_for_design": true
},
{
"type_variety": "openroadm_ila_standard",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4, -6.250e-2, -1.071, 28.99],
"pmd": 3e-12,
"pdl": 0.7,
"allowed_for_design": true
},
{
"type_variety": "openroadm_mw_mw_preamp_typical_ver5",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4, -6.250e-2, -1.071, 28.99],
"pmd": 0,
"pdl": 0,
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_preamp_worstcase_ver5",
"type_def": "openroadm",
"gain_flatmax": 27,
"gain_min": 0,
"p_max": 22,
"nf_coef": [-5.952e-4, -6.250e-2, -1.071, 27.99],
"pmd": 0,
"pdl": 0,
"allowed_for_design": false
},
{
"type_variety": "openroadm_mw_mw_booster",
"type_def": "openroadm_booster",
"gain_flatmax": 32,
"gain_min": 0,
"p_max": 22,
"pmd": 0,
"pdl": 0,
"allowed_for_design": false
}
],
"Fiber": [
{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
},
{
"type_variety": "NZDF",
"dispersion": 0.5e-05,
"effective_area": 72e-12,
"pmd_coef": 1.265e-15
},
{
"type_variety": "LOF",
"dispersion": 2.2e-05,
"effective_area": 125e-12,
"pmd_coef": 1.265e-15
}
],
"RamanFiber": [
{
"type_variety": "SSMF",
"dispersion": 1.67e-05,
"effective_area": 83e-12,
"pmd_coef": 1.265e-15
}
],
"Span": [
{
"power_mode": true,
"delta_power_range_db": [0, 0, 0],
"max_fiber_lineic_loss_for_raman": 0.25,
"target_extended_gain": 0,
"max_length": 135,
"length_units": "km",
"max_loss": 28,
"padding": 11,
"EOL": 0,
"con_in": 0,
"con_out": 0
}
],
"Roadm": [
{
"target_pch_out_db": -20,
"add_drop_osnr": 33,
"pmd": 3e-12,
"pdl": 1.5,
"restrictions": {
"preamp_variety_list": ["openroadm_mw_mw_preamp_worstcase_ver5"],
"booster_variety_list": ["openroadm_mw_mw_booster"]
}
}
],
"SI": [
{
"f_min": 191.3e12,
"baud_rate": 31.57e9,
"f_max": 196.1e12,
"spacing": 50e9,
"power_dbm": 2,
"power_range_db": [0, 0, 1],
"roll_off": 0.15,
"tx_osnr": 35,
"sys_margins": 2
}
],
"Transceiver": [
{
"type_variety": "OpenROADM MSA ver. 5.0",
"frequency": {
"min": 191.35e12,
"max": 196.1e12
},
"mode": [
{
"format": "100 Gbit/s, 27.95 Gbaud, DP-QPSK",
"baud_rate": 27.95e9,
"OSNR": 17,
"bit_rate": 100e9,
"roll_off": null,
"tx_osnr": 33,
"penalties": [
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 18e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 30,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
},
{
"pdl": 6,
"penalty_value": 4
}
],
"min_spacing": 50e9,
"cost": 1
},
{
"format": "100 Gbit/s, 31.57 Gbaud, DP-QPSK",
"baud_rate": 31.57e9,
"OSNR": 12,
"bit_rate": 100e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 48e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 30,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
},
{
"pdl": 6,
"penalty_value": 4
}
],
"min_spacing": 50e9,
"cost": 1
},
{
"format": "200 Gbit/s, 31.57 Gbaud, DP-16QAM",
"baud_rate": 31.57e9,
"OSNR": 20.5,
"bit_rate": 100e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 24e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 30,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
},
{
"pdl": 6,
"penalty_value": 4
}
],
"min_spacing": 50e9,
"cost": 1
},
{
"format": "200 Gbit/s, DP-QPSK",
"baud_rate": 63.1e9,
"OSNR": 17,
"bit_rate": 200e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 24e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 25,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
},
{
"format": "300 Gbit/s, DP-8QAM",
"baud_rate": 63.1e9,
"OSNR": 21,
"bit_rate": 300e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 18e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 25,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
},
{
"format": "400 Gbit/s, DP-16QAM",
"baud_rate": 63.1e9,
"OSNR": 24,
"bit_rate": 400e9,
"roll_off": 0.15,
"tx_osnr": 36,
"penalties": [
{
"chromatic_dispersion": -1e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 4e3,
"penalty_value": 0
},
{
"chromatic_dispersion": 12e3,
"penalty_value": 0.5
},
{
"pmd": 10,
"penalty_value": 0
},
{
"pmd": 20,
"penalty_value": 0.5
},
{
"pdl": 1,
"penalty_value": 0.5
},
{
"pdl": 2,
"penalty_value": 1
},
{
"pdl": 4,
"penalty_value": 2.5
}
],
"min_spacing": 87.5e9,
"cost": 1
}
]
}
]
}


@@ -1,13 +0,0 @@
{
"raman_params": {
"flag": true,
"result_spatial_resolution": 10e3,
"solver_spatial_resolution": 50
},
"nli_params": {
"method": "ggn_spectrally_separated",
"dispersion_tolerance": 1,
"phase_shift_tolerance": 0.1,
"computed_channels": [1, 18, 37, 56, 75]
}
}


@@ -1,304 +0,0 @@
{
"nf_fit_coeff": [
0.000168241,
0.0469961,
0.0359549,
5.82851
],
"f_min": 191.35e12,
"f_max": 196.1e12,
"nf_ripple": [
0.4372876328262819,
0.4372876328262819,
0.41270842850729195,
0.38814205928193013,
0.36358851509924695,
0.3390191214858807,
0.30474360397422756,
0.27048596623174515,
0.23624619427167134,
0.202035284929368,
0.1694483010211072,
0.13687829834471027,
0.1043252636301016,
0.07184040799914815,
0.061288823415841555,
0.050742731588695494,
0.04020212822983975,
0.029667009055877668,
0.01913736978785662,
0.00861320615127981,
-0.010157321677553965,
-0.028982516728038848,
-0.04779792991567815,
-0.06660356886269536,
-0.06256260169582961,
-0.05832916277634124,
-0.05409792133358102,
-0.04990610405914272,
-0.05078533294804249,
-0.05166410580536087,
-0.05254242298580185,
-0.05342028484370278,
-0.051742390657545205,
-0.050039429413028365,
-0.048337350303318156,
-0.04663615264317309,
-0.04493583574805963,
-0.043236398934156144,
-0.035622012697103154,
-0.027999803010447587,
-0.02038153550619876,
-0.012779471908040341,
-0.006436207679519103,
-9.622162373026585e-05,
0.006240488799898697,
0.012573926129294415,
0.021418708618354456,
0.030289222542492025,
0.03915515813685565,
0.047899419704645264,
0.04256372893215024,
0.03723078993416436,
0.03190060058247842,
0.02657315875107553,
0.021248462316134083,
0.01605877647020772,
0.02326948274513522,
0.03047647598902483,
0.037679759069084225,
0.044883315610536455,
0.052470799141237305,
0.06005437964543287,
0.0676340601339394,
0.07521193198077789,
0.08415906712621996,
0.09310160456603413,
0.1020395478432815,
0.11079585523492333,
0.1018180306253394,
0.09284481475528361,
0.0838762040768461,
0.07482015390297145,
0.05670549786742816,
0.03860013139908377,
0.020504047353947653,
0.0024172385953583004,
-0.015660302006048,
-0.03372858157230583,
-0.07037375788020579,
-0.10709599992470213,
-0.14379944379052215,
-0.18048410390821285,
-0.20911178784023846,
-0.23772399031437283,
-0.26632156113294336,
-0.2949045115165272,
-0.30206775396360075,
-0.30915729645781326,
-0.31624321721895354,
-0.3233255190215882,
-0.32037911876162584,
-0.3172854168606314,
-0.31419329378173544,
-0.31110274831665313,
-0.3110761646066259,
-0.3110761646066259
],
"dgt": [
1.0,
1.017807767853702,
1.0356155337864215,
1.0534217504465226,
1.0712204022764056,
1.0895983485572227,
1.108555289615659,
1.1280891949729075,
1.1476135933863398,
1.1672278304018044,
1.1869318618366975,
1.2067249615595257,
1.2264996957264114,
1.2428104897182262,
1.2556591482982988,
1.2650555289898042,
1.2744470198196236,
1.2838336236692311,
1.2932153453410835,
1.3040618749785347,
1.316383926863083,
1.3301807335621048,
1.3439818461440451,
1.3598972673004606,
1.3779439775587023,
1.3981208704326855,
1.418273806730323,
1.4340878115214444,
1.445565137158368,
1.45273959485914,
1.4599103316162523,
1.4670307626366115,
1.474100442252211,
1.48111939735681,
1.488134243479226,
1.495145456062699,
1.502153039909686,
1.5097346239790443,
1.5178910621476225,
1.5266220576235803,
1.5353620432989845,
1.545374152761467,
1.5566577309558969,
1.569199764184379,
1.5817353179379183,
1.5986915141218316,
1.6201194134191075,
1.6460167077689267,
1.6719047669939942,
1.6918150918099673,
1.7057507692361864,
1.7137640932265894,
1.7217732861435076,
1.7297783508684146,
1.737780757913635,
1.7459181197626403,
1.7541903672600494,
1.7625959636196327,
1.7709972329654864,
1.7793941781790852,
1.7877868031023945,
1.7961751115773796,
1.8045606557581335,
1.8139629377087627,
1.824381436842932,
1.835814081380705,
1.847275503201129,
1.862235672444246,
1.8806927939516411,
1.9026104247588487,
1.9245345552113182,
1.9482128147680253,
1.9736443063300082,
2.0008103857988204,
2.0279625371819305,
2.055100772005235,
2.082225099873648,
2.1183028432496016,
2.16337565384239,
2.2174389328192197,
2.271520771371253,
2.322373696229342,
2.3699990328716107,
2.414398437185221,
2.4587748041127506,
2.499446286796604,
2.5364027376452056,
2.5696460593920065,
2.602860350286428,
2.630396440815385,
2.6521732021128046,
2.6681935771243177,
2.6841217449620203,
2.6947834587664494,
2.705443819238505,
2.714526681131686
],
"gain_ripple": [
0.07704745697916238,
0.06479749697916048,
0.05257029697916238,
0.040326236979161934,
0.028098946979159933,
0.01393231697916164,
-0.0021726530208390216,
-0.01819858302084043,
-0.03218106302083967,
-0.042428283020839785,
-0.05095282302083959,
-0.05947139302083926,
-0.06968090302083851,
-0.07844600302084004,
-0.08407607302083875,
-0.0865687230208394,
-0.08906007302083907,
-0.0913487130208388,
-0.09343261302083761,
-0.09717347302083823,
-0.1027863830208382,
-0.11089282302084058,
-0.11963431302083904,
-0.1279646530208396,
-0.13525493302083902,
-0.1409032730208395,
-0.14591937302083835,
-0.14823350302084037,
-0.1484450830208388,
-0.1455411330208385,
-0.14160178302083892,
-0.1353792530208402,
-0.12789859302083784,
-0.11916081302083725,
-0.11041488302083735,
-0.10103437302083762,
-0.09101254302083817,
-0.07868024302083754,
-0.06468462302083822,
-0.051112303020840244,
-0.039618433020837784,
-0.028748483020837767,
-0.016475303020840215,
-0.006936193020838033,
-0.0015763130208377163,
0.0007104669791608842,
0.0040435869791615175,
0.006965146979162284,
0.00842583697916055,
0.00874012697916271,
0.00936596697916059,
0.01030063697916006,
0.011234826979162449,
0.013321846979160057,
0.01659282697915998,
0.023488786979161347,
0.03285456697916089,
0.04072968697916224,
0.04467697697916151,
0.04551704697916037,
0.04717897697916129,
0.04946107697915991,
0.05154489697916276,
0.05447361697916264,
0.05848224697916038,
0.06916723697916183,
0.08548825697916129,
0.10802383697916085,
0.13114358697916018,
0.15216302697916007,
0.17037189697916233,
0.1767381569791624,
0.1739275269791598,
0.15945681697916214,
0.14239527697916188,
0.12276252697916235,
0.10313984697916112,
0.08731066697916035,
0.07533675697916209,
0.07114372697916238,
0.07094413697916124,
0.07091459697916136,
0.0670723869791594,
0.054956336979159914,
0.038328296979159404,
0.017572956979162058,
-0.0028138630208403015,
-0.016792253020838643,
-0.0246928330208398,
-0.018326963020840026,
-0.0036199830208403228,
0.02602813697916062,
0.06245819697916133,
0.09542181697916163,
0.11822862697916037,
0.1359703369791596
]
}

@@ -1,5 +0,0 @@
'''
Processing of data via :py:mod:`.json_io`.
Utilities for Excel conversion in :py:mod:`.convert` and :py:mod:`.service_sheet`.
Example code in :py:mod:`.cli_examples` and :py:mod:`.plots`.
'''

@@ -1,443 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
gnpy.tools.cli_examples
=======================
Common code for CLI examples
'''
import argparse
import logging
import sys
from math import ceil
from numpy import linspace, mean
from pathlib import Path
import gnpy.core.ansi_escapes as ansi_escapes
from gnpy.core.elements import Transceiver, Fiber, RamanFiber
from gnpy.core.equipment import trx_mode_params
import gnpy.core.exceptions as exceptions
from gnpy.core.network import build_network
from gnpy.core.parameters import SimParams
from gnpy.core.utils import db2lin, lin2db, automatic_nch
from gnpy.topology.request import (ResultElement, jsontocsv, compute_path_dsjctn, requests_aggregation,
BLOCKING_NOPATH, correct_json_route_list,
deduplicate_disjunctions, compute_path_with_disjunction,
PathRequest, compute_constrained_path, propagate)
from gnpy.topology.spectrum_assignment import build_oms_list, pth_assign_spectrum
from gnpy.tools.json_io import load_equipment, load_network, load_json, load_requests, save_network, \
requests_from_json, disjunctions_from_json, save_json
from gnpy.tools.plots import plot_baseline, plot_results
_logger = logging.getLogger(__name__)
_examples_dir = Path(__file__).parent.parent / 'example-data'
_help_footer = '''
This program is part of GNPy, https://github.com/TelecomInfraProject/oopt-gnpy
Learn more at https://gnpy.readthedocs.io/
'''
_help_fname_json = 'FILE.json'
_help_fname_json_csv = 'FILE.(json|csv)'
def show_example_data_dir():
print(f'{_examples_dir}/')
def load_common_data(equipment_filename, topology_filename, simulation_filename, save_raw_network_filename):
'''Load common configuration from JSON files'''
try:
equipment = load_equipment(equipment_filename)
network = load_network(topology_filename, equipment)
if save_raw_network_filename is not None:
save_network(network, save_raw_network_filename)
print(f'{ansi_escapes.blue}Raw network (no optimizations) saved to {save_raw_network_filename}{ansi_escapes.reset}')
if not simulation_filename:
sim_params = {}
if next((node for node in network if isinstance(node, RamanFiber)), None) is not None:
print(f'{ansi_escapes.red}Invocation error:{ansi_escapes.reset} '
f'RamanFiber requires passing simulation params via --sim-params')
sys.exit(1)
else:
sim_params = load_json(simulation_filename)
SimParams.set_params(sim_params)
except exceptions.EquipmentConfigError as e:
print(f'{ansi_escapes.red}Configuration error in the equipment library:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.NetworkTopologyError as e:
print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.ParametersError as e:
print(f'{ansi_escapes.red}Simulation parameters error:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.ConfigurationError as e:
print(f'{ansi_escapes.red}Configuration error:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.ServiceError as e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {e}')
sys.exit(1)
return (equipment, network)
def _setup_logging(args):
logging.basicConfig(level={2: logging.DEBUG, 1: logging.INFO, 0: logging.CRITICAL}.get(args.verbose, logging.DEBUG))
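# For example: with no -v only CRITICAL messages are shown, -v enables INFO,
# and -vv (or more) enables DEBUG.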
def _add_common_options(parser: argparse.ArgumentParser, network_default: Path):
parser.add_argument('topology', nargs='?', type=Path, metavar='NETWORK-TOPOLOGY.(json|xls|xlsx)',
default=network_default,
help='Input network topology')
parser.add_argument('-v', '--verbose', action='count', default=0,
help='Increase verbosity (can be specified several times)')
parser.add_argument('-e', '--equipment', type=Path, metavar=_help_fname_json,
default=_examples_dir / 'eqpt_config.json', help='Equipment library')
parser.add_argument('--sim-params', type=Path, metavar=_help_fname_json,
default=None, help='Path to the JSON containing simulation parameters (required for Raman). '
f'Example: {_examples_dir / "sim_params.json"}')
parser.add_argument('--save-network', type=Path, metavar=_help_fname_json,
help='Save the final network as a JSON file')
parser.add_argument('--save-network-before-autodesign', type=Path, metavar=_help_fname_json,
help='Dump the network into a JSON file prior to autodesign')
parser.add_argument('--no-insert-edfas', action='store_true',
help='Disable insertion of EDFAs after ROADMs and fibers '
'as well as splitting of fibers by auto-design.')
def transmission_main_example(args=None):
parser = argparse.ArgumentParser(
description='Send a full spectrum load through the network from point A to point B',
epilog=_help_footer,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
_add_common_options(parser, network_default=_examples_dir / 'edfa_example_network.json')
parser.add_argument('--show-channels', action='store_true', help='Show final per-channel OSNR and GSNR summary')
parser.add_argument('-pl', '--plot', action='store_true')
parser.add_argument('-l', '--list-nodes', action='store_true', help='list all transceiver nodes')
parser.add_argument('-po', '--power', default=0, help='channel ref power in dBm')
parser.add_argument('source', nargs='?', help='source node')
parser.add_argument('destination', nargs='?', help='destination node')
args = parser.parse_args(args if args is not None else sys.argv[1:])
_setup_logging(args)
(equipment, network) = load_common_data(args.equipment, args.topology, args.sim_params, args.save_network_before_autodesign)
if args.plot:
plot_baseline(network)
transceivers = {n.uid: n for n in network.nodes() if isinstance(n, Transceiver)}
if not transceivers:
sys.exit('Network has no transceivers!')
if len(transceivers) < 2:
sys.exit('Network has only one transceiver!')
if args.list_nodes:
for uid in transceivers:
print(uid)
sys.exit()
# First try to find exact match if source/destination provided
if args.source:
source = transceivers.pop(args.source, None)
valid_source = True if source else False
else:
source = None
_logger.info('No source node specified: picking random transceiver')
if args.destination:
destination = transceivers.pop(args.destination, None)
valid_destination = True if destination else False
else:
destination = None
_logger.info('No destination node specified: picking random transceiver')
# If no exact match try to find partial match
if args.source and not source:
# TODO code a more advanced regex to find matching nodes
source = next((transceivers.pop(uid) for uid in transceivers
if args.source.lower() in uid.lower()), None)
if args.destination and not destination:
# TODO code a more advanced regex to find matching nodes
destination = next((transceivers.pop(uid) for uid in transceivers
if args.destination.lower() in uid.lower()), None)
# If no partial match or no source/destination provided pick random
if not source:
source = list(transceivers.values())[0]
del transceivers[source.uid]
if not destination:
destination = list(transceivers.values())[0]
_logger.info(f'source = {args.source!r}')
_logger.info(f'destination = {args.destination!r}')
params = {}
params['request_id'] = 0
params['trx_type'] = ''
params['trx_mode'] = ''
params['source'] = source.uid
params['destination'] = destination.uid
params['bidir'] = False
params['nodes_list'] = [destination.uid]
params['loose_list'] = ['strict']
params['format'] = ''
params['path_bandwidth'] = 0
params['effective_freq_slot'] = None
trx_params = trx_mode_params(equipment)
if args.power:
trx_params['power'] = db2lin(float(args.power)) * 1e-3
params.update(trx_params)
req = PathRequest(**params)
power_mode = equipment['Span']['default'].power_mode
print('\n'.join([f'Power mode is set to {power_mode}',
f'=> it can be modified in eqpt_config.json - Span']))
pref_ch_db = lin2db(req.power * 1e3) # reference channel power / span (SL=20dB)
pref_total_db = pref_ch_db + lin2db(req.nb_channel) # reference total power / span (SL=20dB)
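# Illustrative numbers: with a 1 mW (0 dBm) reference channel and, say, 80 channels,
# pref_ch_db is 0 dBm and pref_total_db is about 0 + 10*log10(80) ~= 19 dBm.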
try:
build_network(network, equipment, pref_ch_db, pref_total_db, args.no_insert_edfas)
except exceptions.NetworkTopologyError as e:
print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.ConfigurationError as e:
print(f'{ansi_escapes.red}Configuration error:{ansi_escapes.reset} {e}')
sys.exit(1)
path = compute_constrained_path(network, req)
spans = [s.params.length for s in path if isinstance(s, RamanFiber) or isinstance(s, Fiber)]
print(f'\nThere are {len(spans)} fiber spans over {sum(spans)/1000:.0f} km between {source.uid} '
f'and {destination.uid}')
print(f'\nNow propagating between {source.uid} and {destination.uid}:')
power_range = [0]
if power_mode:
# power cannot be changed in gain mode
try:
p_start, p_stop, p_step = equipment['SI']['default'].power_range_db
p_num = abs(int(round((p_stop - p_start) / p_step))) + 1 if p_step != 0 else 1
power_range = list(linspace(p_start, p_stop, p_num))
except TypeError:
print('invalid power range definition in eqpt_config, should be power_range_db: [lower, upper, step]')
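# Illustrative sweep: power_range_db = [-2, 3, 1] yields p_num = 6 and
# power_range = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0] (offsets in dB around the reference power).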
for dp_db in power_range:
req.power = db2lin(pref_ch_db + dp_db) * 1e-3
if power_mode:
print(f'\nPropagating with input power = {ansi_escapes.cyan}{lin2db(req.power*1e3):.2f} dBm{ansi_escapes.reset}:')
else:
print(f'\nPropagating in {ansi_escapes.cyan}gain mode{ansi_escapes.reset}: power cannot be set manually')
infos = propagate(path, req, equipment)
if len(power_range) == 1:
for elem in path:
print(elem)
if power_mode:
print(f'\nTransmission result for input power = {lin2db(req.power*1e3):.2f} dBm:')
else:
print(f'\nTransmission results:')
print(f' Final GSNR (0.1 nm): {ansi_escapes.cyan}{mean(destination.snr_01nm):.02f} dB{ansi_escapes.reset}')
else:
print(path[-1])
if args.save_network is not None:
save_network(network, args.save_network)
print(f'{ansi_escapes.blue}Network (after autodesign) saved to {args.save_network}{ansi_escapes.reset}')
if args.show_channels:
print('\nThe GSNR per channel at the end of the line is:')
print(
'{:>5}{:>26}{:>26}{:>28}{:>28}{:>28}' .format(
'Ch. #',
'Channel frequency (THz)',
'Channel power (dBm)',
'OSNR ASE (signal bw, dB)',
'SNR NLI (signal bw, dB)',
'GSNR (signal bw, dB)'))
for final_carrier, ch_osnr, ch_snr_nl, ch_snr in zip(
infos.carriers, path[-1].osnr_ase, path[-1].osnr_nli, path[-1].snr):
ch_freq = final_carrier.frequency * 1e-12
ch_power = lin2db(final_carrier.power.signal * 1e3)
print(
'{:5}{:26.2f}{:26.2f}{:28.2f}{:28.2f}{:28.2f}' .format(
final_carrier.channel_number, round(
ch_freq, 2), round(
ch_power, 2), round(
ch_osnr, 2), round(
ch_snr_nl, 2), round(
ch_snr, 2)))
if not args.source:
print(f'\n(No source node specified: picked {source.uid})')
elif not valid_source:
print(f'\n(Invalid source node {args.source!r} replaced with {source.uid})')
if not args.destination:
print(f'\n(No destination node specified: picked {destination.uid})')
elif not valid_destination:
print(f'\n(Invalid destination node {args.destination!r} replaced with {destination.uid})')
if args.plot:
plot_results(network, path, source, destination)
def _path_result_json(pathresult):
return {'response': [n.json for n in pathresult]}
def path_requests_run(args=None):
parser = argparse.ArgumentParser(
description='Compute performance for a list of services provided in a json file or an excel sheet',
epilog=_help_footer,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
_add_common_options(parser, network_default=_examples_dir / 'meshTopologyExampleV2.xls')
parser.add_argument('service_filename', nargs='?', type=Path, metavar='SERVICES-REQUESTS.(json|xls|xlsx)',
default=_examples_dir / 'meshTopologyExampleV2.xls',
help='Input service file')
parser.add_argument('-bi', '--bidir', action='store_true',
help='Treat all demands as bidirectional')
parser.add_argument('-o', '--output', type=Path, metavar=_help_fname_json_csv,
help='Store satisfied requests into a JSON or CSV file')
args = parser.parse_args(args if args is not None else sys.argv[1:])
_setup_logging(args)
_logger.info(f'Computing path requests {args.service_filename} into JSON format')
(equipment, network) = load_common_data(args.equipment, args.topology, args.sim_params, args.save_network_before_autodesign)
# Build the network once using the default power defined in SI in eqpt config
# TODO power density: db2lin(power_dbm) / power_dbm * nb channels as defined by
# spacing, f_min and f_max
p_db = equipment['SI']['default'].power_dbm
p_total_db = p_db + lin2db(automatic_nch(equipment['SI']['default'].f_min,
equipment['SI']['default'].f_max, equipment['SI']['default'].spacing))
try:
build_network(network, equipment, p_db, p_total_db, args.no_insert_edfas)
except exceptions.NetworkTopologyError as e:
print(f'{ansi_escapes.red}Invalid network definition:{ansi_escapes.reset} {e}')
sys.exit(1)
except exceptions.ConfigurationError as e:
print(f'{ansi_escapes.red}Configuration error:{ansi_escapes.reset} {e}')
sys.exit(1)
if args.save_network is not None:
save_network(network, args.save_network)
print(f'{ansi_escapes.blue}Network (after autodesign) saved to {args.save_network}{ansi_escapes.reset}')
oms_list = build_oms_list(network, equipment)
try:
data = load_requests(args.service_filename, equipment, bidir=args.bidir,
network=network, network_filename=args.topology)
rqs = requests_from_json(data, equipment)
except exceptions.ServiceError as e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {e}')
sys.exit(1)
# check that request ids are unique; non-unique ids may
# mess up the computation, so it is better to stop
all_ids = [r.request_id for r in rqs]
if len(all_ids) != len(set(all_ids)):
for item in list(set(all_ids)):
all_ids.remove(item)
msg = f'Request ids {all_ids} are not unique'
_logger.critical(msg)
sys.exit()
rqs = correct_json_route_list(network, rqs)
# pths = compute_path(network, equipment, rqs)
dsjn = disjunctions_from_json(data)
print(f'{ansi_escapes.blue}List of disjunctions{ansi_escapes.reset}')
print(dsjn)
# need to warn or correct in case of a malformed disjunction:
# a disjunction must not be repeated with the same or different ids
dsjn = deduplicate_disjunctions(dsjn)
# Aggregate demands with same exact constraints
print(f'{ansi_escapes.blue}Aggregating similar requests{ansi_escapes.reset}')
rqs, dsjn = requests_aggregation(rqs, dsjn)
# TODO export novel set of aggregated demands in a json file
print(f'{ansi_escapes.blue}The following services have been requested:{ansi_escapes.reset}')
print(rqs)
print(f'{ansi_escapes.blue}Computing all paths with constraints{ansi_escapes.reset}')
try:
pths = compute_path_dsjctn(network, equipment, rqs, dsjn)
except exceptions.DisjunctionError as this_e:
print(f'{ansi_escapes.red}Disjunction error:{ansi_escapes.reset} {this_e}')
sys.exit(1)
print(f'{ansi_escapes.blue}Propagating on selected path{ansi_escapes.reset}')
propagatedpths, reversed_pths, reversed_propagatedpths = compute_path_with_disjunction(network, equipment, rqs, pths)
# Note that the deepcopy used in compute_path_with_disjunction returns
# a list of nodes which do not belong to the network (they are copies of the node objects),
# so there cannot be any propagation on these nodes.
pth_assign_spectrum(pths, rqs, oms_list, reversed_pths)
print(f'{ansi_escapes.blue}Result summary{ansi_escapes.reset}')
header = ['req id', ' demand', ' GSNR@bandwidth A-Z (Z-A)', ' GSNR@0.1nm A-Z (Z-A)',
' Receiver minOSNR', ' mode', ' Gbit/s', ' nb of tsp pairs',
'N,M or blocking reason']
data = []
data.append(header)
for i, this_p in enumerate(propagatedpths):
rev_pth = reversed_propagatedpths[i]
if rev_pth and this_p:
psnrb = f'{round(mean(this_p[-1].snr),2)} ({round(mean(rev_pth[-1].snr),2)})'
psnr = f'{round(mean(this_p[-1].snr_01nm), 2)}' +\
f' ({round(mean(rev_pth[-1].snr_01nm),2)})'
elif this_p:
psnrb = f'{round(mean(this_p[-1].snr),2)}'
psnr = f'{round(mean(this_p[-1].snr_01nm),2)}'
try:
if rqs[i].blocking_reason in BLOCKING_NOPATH:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} :',
f'-', f'-', f'-', f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9,2)}',
f'-', f'{rqs[i].blocking_reason}']
else:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} : ', psnrb,
psnr, f'-', f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9, 2)}',
f'-', f'{rqs[i].blocking_reason}']
except AttributeError:
line = [f'{rqs[i].request_id}', f' {rqs[i].source} to {rqs[i].destination} : ', psnrb,
psnr, f'{rqs[i].OSNR + equipment["SI"]["default"].sys_margins}',
f'{rqs[i].tsp_mode}', f'{round(rqs[i].path_bandwidth * 1e-9,2)}',
f'{ceil(rqs[i].path_bandwidth / rqs[i].bit_rate) }', f'({rqs[i].N},{rqs[i].M})']
data.append(line)
col_width = max(len(word) for row in data for word in row[2:]) # padding
firstcol_width = max(len(row[0]) for row in data) # padding
secondcol_width = max(len(row[1]) for row in data) # padding
for row in data:
firstcol = ''.join(row[0].ljust(firstcol_width))
secondcol = ''.join(row[1].ljust(secondcol_width))
remainingcols = ''.join(word.center(col_width, ' ') for word in row[2:])
print(f'{firstcol} {secondcol} {remainingcols}')
print(f'{ansi_escapes.yellow}Result summary shows mean GSNR and OSNR (average over all channels){ansi_escapes.reset}')
if args.output:
result = []
# assumes that the list of rqs and the list of propagatedpths have the same order
for i, pth in enumerate(propagatedpths):
result.append(ResultElement(rqs[i], pth, reversed_propagatedpths[i]))
temp = _path_result_json(result)
if args.output.suffix.lower() == '.json':
save_json(temp, args.output)
print(f'{ansi_escapes.blue}Saved JSON to {args.output}{ansi_escapes.reset}')
elif args.output.suffix.lower() == '.csv':
with open(args.output, "w", encoding='utf-8') as fcsv:
jsontocsv(temp, equipment, fcsv)
print(f'{ansi_escapes.blue}Saved CSV to {args.output}{ansi_escapes.reset}')
else:
print(f'{ansi_escapes.red}Cannot save output: neither JSON nor CSV file{ansi_escapes.reset}')
sys.exit(1)
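A minimal sketch of exercising these CLI helpers from Python; it assumes GNPy is installed together with its bundled example data, and with no arguments each entry point falls back to its default example inputs:

```python
from gnpy.tools.cli_examples import path_requests_run, transmission_main_example

# full-spectrum propagation over the default edfa_example_network.json topology
transmission_main_example([])

# path computation for the default meshTopologyExampleV2.xls topology and service sheet
path_requests_run([])
```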

@@ -1,830 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
gnpy.tools.convert
==================
This module contains utilities for converting between XLS and JSON.
The input XLS file must contain sheets named "Nodes" and "Links".
It may optionally contain a sheet named "Eqpt".
In the "Nodes" sheet, only the "City" column is mandatory. The column "Type"
can be determined automatically given the topology (e.g., if degree 2, ILA;
otherwise, ROADM.) Incorrectly specified types (e.g., ILA for node of
degree ≠ 2) will be automatically corrected.
In the "Links" sheet, only the first three columns ("Node A", "Node Z" and
"east Distance (km)") are mandatory. Missing "west" information is copied from
the "east" information so that it is possible to input undirected data.
"""
from xlrd import open_workbook
from argparse import ArgumentParser
from collections import namedtuple, Counter, defaultdict
from itertools import chain
from json import dumps
from pathlib import Path
from copy import copy
from gnpy.core import ansi_escapes
from gnpy.core.utils import silent_remove
from gnpy.core.exceptions import NetworkTopologyError
from gnpy.core.elements import Edfa, Fused, Fiber
def all_rows(sh, start=0):
return (sh.row(x) for x in range(start, sh.nrows))
class Node(object):
def __init__(self, **kwargs):
super(Node, self).__init__()
self.update_attr(kwargs)
def update_attr(self, kwargs):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in self.default_values.items():
v = clean_kwargs.get(k, v)
setattr(self, k, v)
default_values = {
'city': '',
'state': '',
'country': '',
'region': '',
'latitude': 0,
'longitude': 0,
'node_type': 'ILA',
'booster_restriction': '',
'preamp_restriction': ''
}
class Link(object):
"""attribtes from west parse_ept_headers dict
+node_a, node_z, west_fiber_con_in, east_fiber_con_in
"""
def __init__(self, **kwargs):
super(Link, self).__init__()
self.update_attr(kwargs)
self.distance_units = 'km'
def update_attr(self, kwargs):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in self.default_values.items():
v = clean_kwargs.get(k, v)
setattr(self, k, v)
k = 'west' + k.split('east')[-1]
v = clean_kwargs.get(k, v)
setattr(self, k, v)
def __eq__(self, link):
return (self.from_city == link.from_city and self.to_city == link.to_city) \
or (self.from_city == link.to_city and self.to_city == link.from_city)
default_values = {
'from_city': '',
'to_city': '',
'east_distance': 80,
'east_fiber': 'SSMF',
'east_lineic': 0.2,
'east_con_in': None,
'east_con_out': None,
'east_pmd': 0.1,
'east_cable': ''
}
class Eqpt(object):
def __init__(self, **kwargs):
super(Eqpt, self).__init__()
self.update_attr(kwargs)
def update_attr(self, kwargs):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in self.default_values.items():
v_east = clean_kwargs.get(k, v)
setattr(self, k, v_east)
k = 'west' + k.split('east')[-1]
v_west = clean_kwargs.get(k, v)
setattr(self, k, v_west)
default_values = {
'from_city': '',
'to_city': '',
'east_amp_type': '',
'east_att_in': 0,
'east_amp_gain': None,
'east_amp_dp': None,
'east_tilt': 0,
'east_att_out': None
}
class Roadm(object):
def __init__(self, **kwargs):
super(Roadm, self).__init__()
self.update_attr(kwargs)
def update_attr(self, kwargs):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in self.default_values.items():
v = clean_kwargs.get(k, v)
setattr(self, k, v)
default_values = {'from_node': '',
'to_node': '',
'target_pch_out_db': None
}
def read_header(my_sheet, line, slice_):
""" return the list of headers !:= ''
header_i = [(header, header_column_index), ...]
in a {line, slice1_x, slice_y} range
"""
Param_header = namedtuple('Param_header', 'header colindex')
try:
header = [x.value.strip() for x in my_sheet.row_slice(line, slice_[0], slice_[1])]
header_i = [Param_header(header, i + slice_[0]) for i, header in enumerate(header) if header != '']
except Exception:
header_i = []
if header_i != [] and header_i[-1].colindex != slice_[1]:
header_i.append(Param_header('', slice_[1]))
return header_i
def read_slice(my_sheet, line, slice_, header):
"""return the slice range of a given header
in a defined range {line, slice_x, slice_y}"""
header_i = read_header(my_sheet, line, slice_)
slice_range = (-1, -1)
if header_i != []:
try:
slice_range = next((h.colindex, header_i[i + 1].colindex)
for i, h in enumerate(header_i) if header in h.header)
except Exception:
pass
return slice_range
def parse_headers(my_sheet, input_headers_dict, headers, start_line, slice_in):
"""return a dict of header_slice
key = column index
value = header name"""
for h0 in input_headers_dict:
slice_out = read_slice(my_sheet, start_line, slice_in, h0)
iteration = 1
while slice_out == (-1, -1) and iteration < 10:
# try next lines
slice_out = read_slice(my_sheet, start_line + iteration, slice_in, h0)
iteration += 1
if slice_out == (-1, -1):
if h0 in ('east', 'Node A', 'Node Z', 'City'):
print(f'{ansi_escapes.red}CRITICAL{ansi_escapes.reset}: missing _{h0}_ header: EXECUTION ENDS')
raise NetworkTopologyError(f'Missing _{h0}_ header')
else:
print(f'missing header {h0}')
elif not isinstance(input_headers_dict[h0], dict):
headers[slice_out[0]] = input_headers_dict[h0]
else:
headers = parse_headers(my_sheet, input_headers_dict[h0], headers, start_line + 1, slice_out)
if headers == {}:
print(f'{ansi_escapes.red}CRITICAL ERROR{ansi_escapes.reset}: could not find any header to read _ ABORT')
raise NetworkTopologyError('Could not find any header to read')
return headers
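# For the Links sheet, the returned dict looks like (column indices are illustrative):
# {0: 'from_city', 1: 'to_city', 2: 'east_distance', 3: 'east_fiber', ...}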
def parse_row(row, headers):
return {f: r.value for f, r in
zip([label for label in headers.values()], [row[i] for i in headers])}
def parse_sheet(my_sheet, input_headers_dict, header_line, start_line, column):
headers = parse_headers(my_sheet, input_headers_dict, {}, header_line, (0, column))
for row in all_rows(my_sheet, start=start_line):
yield parse_row(row[0: column], headers)
def _format_items(items):
return '\n'.join(f' - {item}' for item in items)
def sanity_check(nodes, links, nodes_by_city, links_by_city, eqpts_by_city):
duplicate_links = []
for l1 in links:
for l2 in links:
if l1 is not l2 and l1 == l2 and l2 not in duplicate_links:
print(f'\nWARNING\n \
link {l1.from_city}-{l1.to_city} is a duplicate \
\nthe first duplicate link will be removed, but you should check the Links sheet input')
duplicate_links.append(l1)
for l in duplicate_links:
links.remove(l)
links_by_city[l.from_city].remove(l)
links_by_city[l.to_city].remove(l)
unreferenced_nodes = [n for n in nodes_by_city if n not in links_by_city]
if unreferenced_nodes:
raise NetworkTopologyError(f'{ansi_escapes.red}XLS error:{ansi_escapes.reset} The following nodes are not '
f'referenced from the {ansi_escapes.cyan}Links{ansi_escapes.reset} sheet. '
f'If unused, remove them from the {ansi_escapes.cyan}Nodes{ansi_escapes.reset} '
f'sheet:\n'
+ _format_items(unreferenced_nodes))
# no need to check "Links" for invalid nodes because that's already in parse_excel()
wrong_eqpt_from = [n for n in eqpts_by_city if n not in nodes_by_city]
wrong_eqpt_to = [n.to_city for destinations in eqpts_by_city.values()
for n in destinations if n.to_city not in nodes_by_city]
wrong_eqpt = wrong_eqpt_from + wrong_eqpt_to
if wrong_eqpt:
raise NetworkTopologyError(f'{ansi_escapes.red}XLS error:{ansi_escapes.reset} '
f'The {ansi_escapes.cyan}Eqpt{ansi_escapes.reset} sheet refers to nodes that '
f'are not defined in the {ansi_escapes.cyan}Nodes{ansi_escapes.reset} sheet:\n'
+ _format_items(wrong_eqpt))
for city, link in links_by_city.items():
if nodes_by_city[city].node_type.lower() == 'ila' and len(link) != 2:
# wrong input: ILA sites can only be Degree 2
# => correct to make it a ROADM and remove entry in links_by_city
# TODO: put in log rather than print
print(f'invalid node type ({nodes_by_city[city].node_type})\
specified in {city}, replaced by ROADM')
nodes_by_city[city].node_type = 'ROADM'
for n in nodes:
if n.city == city:
n.node_type = 'ROADM'
return nodes, links
def create_roadm_element(node, roadms_by_city):
""" create the json element for a roadm node, including the different cases:
- if there are restrictions
- if there are per degree target power defined on a direction
direction is defined by the booster name, so that booster must also be created in eqpt sheet
if the direction is defined in roadm
"""
roadm = {'uid': f'roadm {node.city}'}
if node.preamp_restriction != '' or node.booster_restriction != '':
roadm['params'] = {
'restrictions': {
'preamp_variety_list': silent_remove(node.preamp_restriction.split(' | '), ''),
'booster_variety_list': silent_remove(node.booster_restriction.split(' | '), '')}
}
if node.city in roadms_by_city.keys():
if 'params' not in roadm.keys():
roadm['params'] = {}
roadm['params']['per_degree_pch_out_db'] = {}
for elem in roadms_by_city[node.city]:
to_node = f'east edfa in {node.city} to {elem.to_node}'
if elem.target_pch_out_db is not None:
roadm['params']['per_degree_pch_out_db'][to_node] = elem.target_pch_out_db
roadm['metadata'] = {'location': {'city': node.city,
'region': node.region,
'latitude': node.latitude,
'longitude': node.longitude}}
roadm['type'] = 'Roadm'
return roadm
def create_east_eqpt_element(node):
""" create amplifiers json elements for the east direction.
this includes the case where the case of a fused element defined instead of an
ILA in eqpt sheet
"""
eqpt = {'uid': f'east edfa in {node.from_city} to {node.to_city}',
'metadata': {'location': {'city': nodes_by_city[node.from_city].city,
'region': nodes_by_city[node.from_city].region,
'latitude': nodes_by_city[node.from_city].latitude,
'longitude': nodes_by_city[node.from_city].longitude}}}
if node.east_amp_type.lower() != '' and node.east_amp_type.lower() != 'fused':
eqpt['type'] = 'Edfa'
eqpt['type_variety'] = f'{node.east_amp_type}'
eqpt['operational'] = {'gain_target': node.east_amp_gain,
'delta_p': node.east_amp_dp,
'tilt_target': node.east_tilt,
'out_voa': node.east_att_out}
elif node.east_amp_type.lower() == '':
eqpt['type'] = 'Edfa'
eqpt['operational'] = {'gain_target': node.east_amp_gain,
'delta_p': node.east_amp_dp,
'tilt_target': node.east_tilt,
'out_voa': node.east_att_out}
elif node.east_amp_type.lower() == 'fused':
# fused edfa variety is a hack to indicate that there should not be a
# booster amplifier out of the roadm.
# If the user specifies ILA in the Nodes sheet and fused in the Eqpt sheet, then assume
# that this is a fused node.
eqpt['type'] = 'Fused'
eqpt['params'] = {'loss': 0}
return eqpt
def create_west_eqpt_element(node):
""" create amplifiers json elements for the west direction.
this includes the case where the case of a fused element defined instead of an
ILA in eqpt sheet
"""
eqpt = {'uid': f'west edfa in {node.from_city} to {node.to_city}',
'metadata': {'location': {'city': nodes_by_city[node.from_city].city,
'region': nodes_by_city[node.from_city].region,
'latitude': nodes_by_city[node.from_city].latitude,
'longitude': nodes_by_city[node.from_city].longitude}},
'type': 'Edfa'}
if node.west_amp_type.lower() != '' and node.west_amp_type.lower() != 'fused':
eqpt['type_variety'] = f'{node.west_amp_type}'
eqpt['operational'] = {'gain_target': node.west_amp_gain,
'delta_p': node.west_amp_dp,
'tilt_target': node.west_tilt,
'out_voa': node.west_att_out}
elif node.west_amp_type.lower() == '':
eqpt['operational'] = {'gain_target': node.west_amp_gain,
'delta_p': node.west_amp_dp,
'tilt_target': node.west_tilt,
'out_voa': node.west_att_out}
elif node.west_amp_type.lower() == 'fused':
eqpt['type'] = 'Fused'
eqpt['params'] = {'loss': 0}
return eqpt
def xls_to_json_data(input_filename, filter_region=[]):
nodes, links, eqpts, roadms = parse_excel(input_filename)
if filter_region:
nodes = [n for n in nodes if n.region.lower() in filter_region]
cities = {n.city for n in nodes}
links = [lnk for lnk in links if lnk.from_city in cities and
lnk.to_city in cities]
cities = {lnk.from_city for lnk in links} | {lnk.to_city for lnk in links}
nodes = [n for n in nodes if n.city in cities]
global nodes_by_city
nodes_by_city = {n.city: n for n in nodes}
global links_by_city
links_by_city = defaultdict(list)
for link in links:
links_by_city[link.from_city].append(link)
links_by_city[link.to_city].append(link)
global eqpts_by_city
eqpts_by_city = defaultdict(list)
for eqpt in eqpts:
eqpts_by_city[eqpt.from_city].append(eqpt)
roadms_by_city = defaultdict(list)
for roadm in roadms:
roadms_by_city[roadm.from_node].append(roadm)
nodes, links = sanity_check(nodes, links, nodes_by_city, links_by_city, eqpts_by_city)
return {
'elements':
[{'uid': f'trx {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Transceiver'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm'] +
[create_roadm_element(x, roadms_by_city)
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm'] +
[{'uid': f'west fused spans in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Fused'}
for x in nodes_by_city.values() if x.node_type.lower() == 'fused'] +
[{'uid': f'east fused spans in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Fused'}
for x in nodes_by_city.values() if x.node_type.lower() == 'fused'] +
[{'uid': f'fiber ({x.from_city} \u2192 {x.to_city})-{x.east_cable}',
'metadata': {'location': midpoint(nodes_by_city[x.from_city],
nodes_by_city[x.to_city])},
'type': 'Fiber',
'type_variety': x.east_fiber,
'params': {'length': round(x.east_distance, 3),
'length_units': x.distance_units,
'loss_coef': x.east_lineic,
'con_in': x.east_con_in,
'con_out': x.east_con_out}
}
for x in links] +
[{'uid': f'fiber ({x.to_city} \u2192 {x.from_city})-{x.west_cable}',
'metadata': {'location': midpoint(nodes_by_city[x.from_city],
nodes_by_city[x.to_city])},
'type': 'Fiber',
'type_variety': x.west_fiber,
'params': {'length': round(x.west_distance, 3),
'length_units': x.distance_units,
'loss_coef': x.west_lineic,
'con_in': x.west_con_in,
'con_out': x.west_con_out}
} for x in links] +
[{'uid': f'west edfa in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Edfa',
'operational': {'gain_target': None,
'tilt_target': 0}
} for x in nodes_by_city.values() if x.node_type.lower() == 'ila' and x.city not in eqpts_by_city] +
[{'uid': f'east edfa in {x.city}',
'metadata': {'location': {'city': x.city,
'region': x.region,
'latitude': x.latitude,
'longitude': x.longitude}},
'type': 'Edfa',
'operational': {'gain_target': None,
'tilt_target': 0}
} for x in nodes_by_city.values() if x.node_type.lower() == 'ila' and x.city not in eqpts_by_city] +
[create_east_eqpt_element(e) for e in eqpts] +
[create_west_eqpt_element(e) for e in eqpts],
'connections':
list(chain.from_iterable([eqpt_connection_by_city(n.city)
for n in nodes]))
+
list(chain.from_iterable(zip(
[{'from_node': f'trx {x.city}',
'to_node': f'roadm {x.city}'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm'],
[{'from_node': f'roadm {x.city}',
'to_node': f'trx {x.city}'}
for x in nodes_by_city.values() if x.node_type.lower() == 'roadm'])))
}
def convert_file(input_filename, filter_region=[], output_json_file_name=None):
data = xls_to_json_data(input_filename, filter_region)
if output_json_file_name is None:
output_json_file_name = input_filename.with_suffix('.json')
with open(output_json_file_name, 'w', encoding='utf-8') as edfa_json_file:
edfa_json_file.write(dumps(data, indent=2, ensure_ascii=False))
edfa_json_file.write('\n') # add end of file newline because json dumps does not.
return output_json_file_name
def corresp_names(input_filename, network):
""" a function that builds the correspondance between names given in the excel,
and names used in the json, and created by the autodesign.
All names are listed
"""
nodes, links, eqpts, roadms = parse_excel(input_filename)
fused = [n.uid for n in network.nodes() if isinstance(n, Fused)]
ila = [n.uid for n in network.nodes() if isinstance(n, Edfa)]
corresp_roadm = {x.city: [f'roadm {x.city}'] for x in nodes
if x.node_type.lower() == 'roadm'}
corresp_fused = {x.city: [f'west fused spans in {x.city}', f'east fused spans in {x.city}']
for x in nodes if x.node_type.lower() == 'fused' and
f'west fused spans in {x.city}' in fused and
f'east fused spans in {x.city}' in fused}
# add the special cases when an ila is changed into a fused
for my_e in eqpts:
name = f'east edfa in {my_e.from_city} to {my_e.to_city}'
if my_e.east_amp_type.lower() == 'fused' and name in fused:
if my_e.from_city in corresp_fused.keys():
corresp_fused[my_e.from_city].append(name)
else:
corresp_fused[my_e.from_city] = [name]
name = f'west edfa in {my_e.from_city} to {my_e.to_city}'
if my_e.west_amp_type.lower() == 'fused' and name in fused:
if my_e.from_city in corresp_fused.keys():
corresp_fused[my_e.from_city].append(name)
else:
corresp_fused[my_e.from_city] = [name]
# build corresp ila based on eqpt sheet
# start with east direction
corresp_ila = {e.from_city: [f'east edfa in {e.from_city} to {e.to_city}']
for e in eqpts if f'east edfa in {e.from_city} to {e.to_city}' in ila}
# west direction, append name or create a new item in dict
for my_e in eqpts:
name = f'west edfa in {my_e.from_city} to {my_e.to_city}'
if name in ila:
if my_e.from_city in corresp_ila.keys():
corresp_ila[my_e.from_city].append(name)
else:
corresp_ila[my_e.from_city] = [name]
# complete with potential autodesign names: amplifiers
for my_l in links:
name = f'Edfa0_fiber ({my_l.to_city} \u2192 {my_l.from_city})-{my_l.west_cable}'
if name in ila:
if my_l.from_city in corresp_ila.keys():
# "east edfa in Stbrieuc to Rennes_STA" is equivalent name as
# "Edfa0_fiber (Lannion_CAS → Stbrieuc)-F056"
# "west edfa in Stbrieuc to Rennes_STA" is equivalent name as
# "Edfa0_fiber (Rennes_STA → Stbrieuc)-F057"
# does not filter names: all types (except boosters) are created.
# in case fibers are split the name here is a prefix
corresp_ila[my_l.from_city].append(name)
else:
corresp_ila[my_l.from_city] = [name]
name = f'Edfa0_fiber ({my_l.from_city} \u2192 {my_l.to_city})-{my_l.east_cable}'
if name in ila:
if my_l.to_city in corresp_ila.keys():
corresp_ila[my_l.to_city].append(name)
else:
corresp_ila[my_l.to_city] = [name]
# merge fused with ila:
for key, val in corresp_fused.items():
if key in corresp_ila.keys():
corresp_ila[key].extend(val)
else:
corresp_ila[key] = val
# no need of roadm booster
return corresp_roadm, corresp_fused, corresp_ila
def parse_excel(input_filename):
link_headers = {
'Node A': 'from_city',
'Node Z': 'to_city',
'east': {
'Distance (km)': 'east_distance',
'Fiber type': 'east_fiber',
'lineic att': 'east_lineic',
'Con_in': 'east_con_in',
'Con_out': 'east_con_out',
'PMD': 'east_pmd',
'Cable id': 'east_cable'
},
'west': {
'Distance (km)': 'west_distance',
'Fiber type': 'west_fiber',
'lineic att': 'west_lineic',
'Con_in': 'west_con_in',
'Con_out': 'west_con_out',
'PMD': 'west_pmd',
'Cable id': 'west_cable'
}
}
node_headers = {
'City': 'city',
'State': 'state',
'Country': 'country',
'Region': 'region',
'Latitude': 'latitude',
'Longitude': 'longitude',
'Type': 'node_type',
'Booster_restriction': 'booster_restriction',
'Preamp_restriction': 'preamp_restriction'
}
eqpt_headers = {
'Node A': 'from_city',
'Node Z': 'to_city',
'east': {
'amp type': 'east_amp_type',
'att_in': 'east_att_in',
'amp gain': 'east_amp_gain',
'delta p': 'east_amp_dp',
'tilt': 'east_tilt',
'att_out': 'east_att_out'
},
'west': {
'amp type': 'west_amp_type',
'att_in': 'west_att_in',
'amp gain': 'west_amp_gain',
'delta p': 'west_amp_dp',
'tilt': 'west_tilt',
'att_out': 'west_att_out'
}
}
roadm_headers = {'Node A': 'from_node',
'Node Z': 'to_node',
'per degree target power (dBm)': 'target_pch_out_db'
}
with open_workbook(input_filename) as wb:
nodes_sheet = wb.sheet_by_name('Nodes')
links_sheet = wb.sheet_by_name('Links')
try:
eqpt_sheet = wb.sheet_by_name('Eqpt')
except Exception:
# eqpt_sheet is optional
eqpt_sheet = None
try:
roadm_sheet = wb.sheet_by_name('Roadms')
except Exception:
# roadm_sheet is optional
roadm_sheet = None
nodes = []
for node in parse_sheet(nodes_sheet, node_headers, NODES_LINE, NODES_LINE + 1, NODES_COLUMN):
nodes.append(Node(**node))
expected_node_types = {'ROADM', 'ILA', 'FUSED'}
for n in nodes:
if n.node_type not in expected_node_types:
n.node_type = 'ILA'
links = []
for link in parse_sheet(links_sheet, link_headers, LINKS_LINE, LINKS_LINE + 2, LINKS_COLUMN):
links.append(Link(**link))
eqpts = []
if eqpt_sheet is not None:
for eqpt in parse_sheet(eqpt_sheet, eqpt_headers, EQPTS_LINE, EQPTS_LINE + 2, EQPTS_COLUMN):
eqpts.append(Eqpt(**eqpt))
roadms = []
if roadm_sheet is not None:
for roadm in parse_sheet(roadm_sheet, roadm_headers, ROADMS_LINE, ROADMS_LINE+2, ROADMS_COLUMN):
roadms.append(Roadm(**roadm))
# sanity check
all_cities = Counter(n.city for n in nodes)
if len(all_cities) != len(nodes):
raise ValueError(f'Duplicate city: {all_cities}')
bad_links = []
for lnk in links:
if lnk.from_city not in all_cities or lnk.to_city not in all_cities:
bad_links.append([lnk.from_city, lnk.to_city])
if bad_links:
raise NetworkTopologyError(f'{ansi_escapes.red}XLS error:{ansi_escapes.reset} '
f'The {ansi_escapes.cyan}Links{ansi_escapes.reset} sheet references nodes that '
f'are not defined in the {ansi_escapes.cyan}Nodes{ansi_escapes.reset} sheet:\n'
+ _format_items(f'{item[0]} -> {item[1]}' for item in bad_links))
return nodes, links, eqpts, roadms
def eqpt_connection_by_city(city_name):
other_cities = fiber_dest_from_source(city_name)
subdata = []
if nodes_by_city[city_name].node_type.lower() in {'ila', 'fused'}:
# Then len(other_cities) == 2
direction = ['west', 'east']
for i in range(2):
from_ = fiber_link(other_cities[i], city_name)
in_ = eqpt_in_city_to_city(city_name, other_cities[0], direction[i])
to_ = fiber_link(city_name, other_cities[1 - i])
subdata += connect_eqpt(from_, in_, to_)
elif nodes_by_city[city_name].node_type.lower() == 'roadm':
for other_city in other_cities:
from_ = f'roadm {city_name}'
in_ = eqpt_in_city_to_city(city_name, other_city)
to_ = fiber_link(city_name, other_city)
subdata += connect_eqpt(from_, in_, to_)
from_ = fiber_link(other_city, city_name)
in_ = eqpt_in_city_to_city(city_name, other_city, "west")
to_ = f'roadm {city_name}'
subdata += connect_eqpt(from_, in_, to_)
return subdata
def connect_eqpt(from_, in_, to_):
connections = []
if in_ != '':
connections = [{'from_node': from_, 'to_node': in_},
{'from_node': in_, 'to_node': to_}]
else:
connections = [{'from_node': from_, 'to_node': to_}]
return connections
def eqpt_in_city_to_city(in_city, to_city, direction='east'):
rev_direction = 'west' if direction == 'east' else 'east'
return_eqpt = ''
if in_city in eqpts_by_city:
for e in eqpts_by_city[in_city]:
if nodes_by_city[in_city].node_type.lower() == 'roadm':
if e.to_city == to_city:
return_eqpt = f'{direction} edfa in {e.from_city} to {e.to_city}'
elif nodes_by_city[in_city].node_type.lower() == 'ila':
if e.to_city != to_city:
direction = rev_direction
return_eqpt = f'{direction} edfa in {e.from_city} to {e.to_city}'
elif nodes_by_city[in_city].node_type.lower() == 'ila':
return_eqpt = f'{direction} edfa in {in_city}'
if nodes_by_city[in_city].node_type.lower() == 'fused':
return_eqpt = f'{direction} fused spans in {in_city}'
return return_eqpt
def corresp_next_node(network, corresp_ila, corresp_roadm):
""" for each name in corresp dictionnaries find the next node in network and its name
given by user in excel. for meshTopology_exampleV2.xls:
user ILA name Stbrieuc covers the two direction. convert.py creates 2 different ILA
with possible names (depending on the direction and if the eqpt was defined in eqpt
sheet)
- east edfa in Stbrieuc to Rennes_STA
- west edfa in Stbrieuc to Rennes_STA
- Edfa0_fiber (Lannion_CAS → Stbrieuc)-F056
- Edfa0_fiber (Rennes_STA → Stbrieuc)-F057
next_node finds the user-defined name of the next node in order to map the path constraints
- east edfa in Stbrieuc to Rennes_STA next node = Rennes_STA
- west edfa in Stbrieuc to Rennes_STA next node = Lannion_CAS
Edfa0_fiber (Lannion_CAS → Stbrieuc)-F056 and Edfa0_fiber (Rennes_STA → Stbrieuc)-F057
do not exist
the function supports fiber splitting and fused nodes and shall only be called if the
excel format is used for both the network and the service
"""
next_node = {}
# consolidate tables and create next_node table
for ila_key, ila_list in corresp_ila.items():
temp = copy(ila_list)
for ila_elem in ila_list:
# find the node with ila_elem string _in_ the node uid. 'in' is used instead of
# '==' to find composed nodes due to fiber splitting in autodesign.
# eg if elem_ila is 'Edfa0_fiber (Lannion_CAS → Stbrieuc)-F056',
# node uid 'Edfa0_fiber (Lannion_CAS → Stbrieuc)-F056_(1/2)' is possible
correct_ila_name = next(n.uid for n in network.nodes() if ila_elem in n.uid)
temp.remove(ila_elem)
temp.append(correct_ila_name)
ila_nd = next(n for n in network.nodes() if ila_elem in n.uid)
next_nd = next(network.successors(ila_nd))
# search for the next ILA or ROADM
while isinstance(next_nd, (Fiber, Fused)):
next_nd = next(network.successors(next_nd))
# if next_nd is a ROADM, add the first found correspondence
for key, val in corresp_roadm.items():
# val is a list of possible names associated with key
if next_nd.uid in val:
next_node[correct_ila_name] = key
break
# if next_nd was not already added to the dict by the previous loop,
# add the first found correspondence among ila names
if correct_ila_name not in next_node.keys():
for key, val in corresp_ila.items():
# in case of split fibers the ila name might not be an exact match
if [e for e in val if e in next_nd.uid]:
next_node[correct_ila_name] = key
break
corresp_ila[ila_key] = temp
return corresp_ila, next_node
def fiber_dest_from_source(city_name):
destinations = []
links_from_city = links_by_city[city_name]
for l in links_from_city:
if l.from_city == city_name:
destinations.append(l.to_city)
else:
destinations.append(l.from_city)
return destinations
def fiber_link(from_city, to_city):
source_dest = (from_city, to_city)
links = links_by_city[from_city]
link = next(l for l in links if l.from_city in source_dest and l.to_city in source_dest)
if link.from_city == from_city:
fiber = f'fiber ({link.from_city} \u2192 {link.to_city})-{link.east_cable}'
else:
fiber = f'fiber ({link.to_city} \u2192 {link.from_city})-{link.west_cable}'
return fiber
def midpoint(city_a, city_b):
lats = city_a.latitude, city_b.latitude
longs = city_a.longitude, city_b.longitude
try:
result = {
'latitude': sum(lats) / 2,
'longitude': sum(longs) / 2
}
except TypeError:
result = {
'latitude': 0,
'longitude': 0
}
return result
# TODO get column size automatically from tuple size
NODES_COLUMN = 10
NODES_LINE = 4
LINKS_COLUMN = 16
LINKS_LINE = 3
EQPTS_LINE = 3
EQPTS_COLUMN = 14
ROADMS_LINE = 3
ROADMS_COLUMN = 3
def _do_convert():
parser = ArgumentParser()
parser.add_argument('workbook', type=Path)
parser.add_argument('-f', '--filter-region', action='append', default=[])
parser.add_argument('--output', type=Path, help='Name of the generated JSON file')
args = parser.parse_args()
res = convert_file(args.workbook, args.filter_region, args.output)
print(f'XLS -> JSON saved to {res}')
if __name__ == '__main__':
_do_convert()
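A minimal sketch of driving the converter from Python instead of the `_do_convert()` command-line wrapper, assuming a local copy of the example workbook:

```python
from pathlib import Path

from gnpy.tools.convert import convert_file

# writes meshTopologyExampleV2.json next to the workbook and returns that path
output = convert_file(Path('meshTopologyExampleV2.xls'))
print(f'XLS -> JSON saved to {output}')
```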

@@ -1,578 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
gnpy.tools.json_io
==================
Loading and saving data from JSON files in GNPy's internal data format
'''
from networkx import DiGraph
from logging import getLogger
from pathlib import Path
import json
from collections import namedtuple
from gnpy.core import ansi_escapes, elements
from gnpy.core.equipment import trx_mode_params
from gnpy.core.exceptions import ConfigurationError, EquipmentConfigError, NetworkTopologyError, ServiceError
from gnpy.core.science_utils import estimate_nf_model
from gnpy.core.utils import automatic_nch, automatic_fmax, merge_amplifier_restrictions
from gnpy.topology.request import PathRequest, Disjunction, compute_spectrum_slot_vs_bandwidth
from gnpy.tools.convert import xls_to_json_data
from gnpy.tools.service_sheet import read_service_sheet
_logger = getLogger(__name__)
Model_vg = namedtuple('Model_vg', 'nf1 nf2 delta_p orig_nf_min orig_nf_max')
Model_fg = namedtuple('Model_fg', 'nf0')
Model_openroadm_ila = namedtuple('Model_openroadm_ila', 'nf_coef')
Model_hybrid = namedtuple('Model_hybrid', 'nf_ram gain_ram edfa_variety')
Model_dual_stage = namedtuple('Model_dual_stage', 'preamp_variety booster_variety')
class Model_openroadm_preamp:
pass
class Model_openroadm_booster:
pass
class _JsonThing:
def update_attr(self, default_values, kwargs, name):
clean_kwargs = {k: v for k, v in kwargs.items() if v != ''}
for k, v in default_values.items():
setattr(self, k, clean_kwargs.get(k, v))
if k not in clean_kwargs and name != 'Amp':
print(ansi_escapes.red +
f'\n WARNING missing {k} attribute in eqpt_config.json[{name}]' +
f'\n default value is {k} = {v}' +
ansi_escapes.reset)
class SI(_JsonThing):
default_values = {
"f_min": 191.35e12,
"f_max": 196.1e12,
"baud_rate": 32e9,
"spacing": 50e9,
"power_dbm": 0,
"power_range_db": [0, 0, 0.5],
"roll_off": 0.15,
"tx_osnr": 45,
"sys_margins": 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'SI')
class Span(_JsonThing):
default_values = {
'power_mode': True,
'delta_power_range_db': None,
'max_fiber_lineic_loss_for_raman': 0.25,
'target_extended_gain': 2.5,
'max_length': 150,
'length_units': 'km',
'max_loss': None,
'padding': 10,
'EOL': 0,
'con_in': 0,
'con_out': 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Span')
class Roadm(_JsonThing):
default_values = {
'target_pch_out_db': -17,
'add_drop_osnr': 100,
'pmd': 0,
'pdl': 0,
'restrictions': {
'preamp_variety_list': [],
'booster_variety_list': []
}
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Roadm')
class Transceiver(_JsonThing):
default_values = {
'type_variety': None,
'frequency': None,
'mode': {}
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Transceiver')
for mode_params in self.mode:
penalties = mode_params.get('penalties')
mode_params['penalties'] = {}
if not penalties:
continue
for impairment in ('chromatic_dispersion', 'pmd', 'pdl'):
imp_penalties = [p for p in penalties if impairment in p]
if not imp_penalties:
continue
if all(p[impairment] > 0 for p in imp_penalties):
# make sure the list of penalty values include a proper lower boundary
# (we assume 0 penalty for 0 impairment)
imp_penalties.insert(0, {impairment: 0, 'penalty_value': 0})
# make sure the list of penalty values are sorted by impairment value
imp_penalties.sort(key=lambda i: i[impairment])
# rearrange as dict of lists instead of list of dicts
mode_params['penalties'][impairment] = {
'up_to_boundary': [p[impairment] for p in imp_penalties],
'penalty_value': [p['penalty_value'] for p in imp_penalties]
}
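# Illustrative rearrangement: penalties given as
# [{'pmd': 30, 'penalty_value': 1}, {'pmd': 10, 'penalty_value': 0.5}]
# become {'pmd': {'up_to_boundary': [0, 10, 30], 'penalty_value': [0, 0.5, 1]}}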
class Fiber(_JsonThing):
default_values = {
'type_variety': '',
'dispersion': None,
'effective_area': None,
'pmd_coef': 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, self.__class__.__name__)
for optional in ['gamma', 'raman_efficiency']:
if optional in kwargs:
setattr(self, optional, kwargs[optional])
class RamanFiber(Fiber):
pass
class Amp(_JsonThing):
default_values = {
'f_min': 191.35e12,
'f_max': 196.1e12,
'type_variety': '',
'type_def': '',
'gain_flatmax': None,
'gain_min': None,
'p_max': None,
'nf_model': None,
'dual_stage_model': None,
'nf_fit_coeff': None,
'nf_ripple': None,
'dgt': None,
'gain_ripple': None,
'out_voa_auto': False,
'allowed_for_design': False,
'raman': False,
'pmd': 0,
'pdl': 0
}
def __init__(self, **kwargs):
self.update_attr(self.default_values, kwargs, 'Amp')
@classmethod
def from_json(cls, filename, **kwargs):
config = Path(filename).parent / 'default_edfa_config.json'
type_variety = kwargs['type_variety']
type_def = kwargs.get('type_def', 'variable_gain') # default compatibility with older json eqpt files
nf_def = None
dual_stage_def = None
if type_def == 'fixed_gain':
try:
nf0 = kwargs.pop('nf0')
except KeyError: # nf0 is expected for a fixed gain amp
raise EquipmentConfigError(f'missing nf0 value input for amplifier: {type_variety} in equipment config')
for k in ('nf_min', 'nf_max'):
try:
del kwargs[k]
except KeyError:
pass
nf_def = Model_fg(nf0)
elif type_def == 'advanced_model':
config = Path(filename).parent / kwargs.pop('advanced_config_from_json')
elif type_def == 'variable_gain':
gain_min, gain_max = kwargs['gain_min'], kwargs['gain_flatmax']
try: # nf_min and nf_max are expected for a variable gain amp
nf_min = kwargs.pop('nf_min')
nf_max = kwargs.pop('nf_max')
except KeyError:
raise EquipmentConfigError(f'missing nf_min or nf_max value input for amplifier: {type_variety} in equipment config')
try: # remove all remaining nf inputs
del kwargs['nf0']
except KeyError:
pass # nf0 is not needed for variable gain amp
nf1, nf2, delta_p = estimate_nf_model(type_variety, gain_min, gain_max, nf_min, nf_max)
nf_def = Model_vg(nf1, nf2, delta_p, nf_min, nf_max)
elif type_def == 'openroadm':
try:
nf_coef = kwargs.pop('nf_coef')
except KeyError: # nf_coef is expected for openroadm amp
raise EquipmentConfigError(f'missing nf_coef input for amplifier: {type_variety} in equipment config')
nf_def = Model_openroadm_ila(nf_coef)
elif type_def == 'openroadm_preamp':
nf_def = Model_openroadm_preamp()
elif type_def == 'openroadm_booster':
nf_def = Model_openroadm_booster()
elif type_def == 'dual_stage':
try: # preamp_variety and booster_variety are expected for a dual stage amp
preamp_variety = kwargs.pop('preamp_variety')
booster_variety = kwargs.pop('booster_variety')
except KeyError:
raise EquipmentConfigError(f'missing preamp/booster variety input for amplifier: {type_variety} in equipment config')
dual_stage_def = Model_dual_stage(preamp_variety, booster_variety)
else:
raise EquipmentConfigError(f'Edfa type_def {type_def} does not exist')
json_data = load_json(config)
return cls(**{**kwargs, **json_data,
'nf_model': nf_def, 'dual_stage_model': dual_stage_def})
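# Illustrative (hypothetical) 'variable_gain' equipment entry accepted by from_json:
# {"type_variety": "std_medium_gain", "type_def": "variable_gain",
#  "gain_flatmax": 26, "gain_min": 15, "p_max": 23,
#  "nf_min": 6, "nf_max": 10, "out_voa_auto": false, "allowed_for_design": true}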
def _automatic_spacing(baud_rate):
"""return the min possible channel spacing for a given baud rate"""
# TODO: this should be parametrized in a cfg file
# list of possible tuples [(max_baud_rate, spacing_for_this_baud_rate)]
spacing_list = [(33e9, 37.5e9), (38e9, 50e9), (50e9, 62.5e9), (67e9, 75e9), (92e9, 100e9)]
return min((s[1] for s in spacing_list if s[0] > baud_rate), default=baud_rate * 1.2)
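# For example: 32e9 baud -> 37.5e9 spacing, 64e9 -> 75e9; above 92e9 it falls back to 1.2 * baud_rate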
def load_equipment(filename):
json_data = load_json(filename)
return _equipment_from_json(json_data, filename)
def _update_dual_stage(equipment):
edfa_dict = equipment['Edfa']
for edfa in edfa_dict.values():
if edfa.type_def == 'dual_stage':
edfa_preamp = edfa_dict[edfa.dual_stage_model.preamp_variety]
edfa_booster = edfa_dict[edfa.dual_stage_model.booster_variety]
for key, value in edfa_preamp.__dict__.items():
attr_k = 'preamp_' + key
setattr(edfa, attr_k, value)
for key, value in edfa_booster.__dict__.items():
attr_k = 'booster_' + key
setattr(edfa, attr_k, value)
edfa.p_max = edfa_booster.p_max
edfa.gain_flatmax = edfa_booster.gain_flatmax + edfa_preamp.gain_flatmax
if edfa.gain_min < edfa_preamp.gain_min:
raise EquipmentConfigError(f'Dual stage {edfa.type_variety} minimal gain is lower than its preamp minimal gain')
return equipment
def _roadm_restrictions_sanity_check(equipment):
""" verifies that booster and preamp restrictions specified in roadm equipment are listed
in the edfa.
"""
restrictions = equipment['Roadm']['default'].restrictions['booster_variety_list'] + \
equipment['Roadm']['default'].restrictions['preamp_variety_list']
for amp_name in restrictions:
if amp_name not in equipment['Edfa']:
raise EquipmentConfigError(f'ROADM restriction {amp_name} does not refer to a defined EDFA name')
def _check_fiber_vs_raman_fiber(equipment):
"""Ensure that Fiber and RamanFiber with the same name define common properties equally"""
if 'RamanFiber' not in equipment:
return
for fiber_type in set(equipment['Fiber'].keys()) & set(equipment['RamanFiber'].keys()):
for attr in ('dispersion', 'dispersion-slope', 'effective_area', 'gamma', 'pmd-coefficient'):
fiber = equipment['Fiber'][fiber_type]
raman = equipment['RamanFiber'][fiber_type]
a = getattr(fiber, attr, None)
b = getattr(raman, attr, None)
if a != b:
raise EquipmentConfigError(f'WARNING: Fiber and RamanFiber definition of "{fiber_type}" '
f'disagrees for "{attr}": {a} != {b}')
def _equipment_from_json(json_data, filename):
"""build global dictionnary eqpt_library that stores all eqpt characteristics:
edfa type type_variety, fiber type_variety
from the eqpt_config.json (filename parameter)
also read advanced_config_from_json file parameters for edfa if they are available:
typically nf_ripple, dfg gain ripple, dgt and nf polynomial nf_fit_coeff
if advanced_config_from_json file parameter is not present: use nf_model:
requires nf_min and nf_max values boundaries of the edfa gain range
"""
equipment = {}
for key, entries in json_data.items():
equipment[key] = {}
for entry in entries:
subkey = entry.get('type_variety', 'default')
if key == 'Edfa':
equipment[key][subkey] = Amp.from_json(filename, **entry)
elif key == 'Fiber':
equipment[key][subkey] = Fiber(**entry)
elif key == 'Span':
equipment[key][subkey] = Span(**entry)
elif key == 'Roadm':
equipment[key][subkey] = Roadm(**entry)
elif key == 'SI':
equipment[key][subkey] = SI(**entry)
elif key == 'Transceiver':
equipment[key][subkey] = Transceiver(**entry)
elif key == 'RamanFiber':
equipment[key][subkey] = RamanFiber(**entry)
else:
raise EquipmentConfigError(f'Unrecognized network element type "{key}"')
_check_fiber_vs_raman_fiber(equipment)
equipment = _update_dual_stage(equipment)
_roadm_restrictions_sanity_check(equipment)
return equipment
def load_network(filename, equipment):
if filename.suffix.lower() in ('.xls', '.xlsx'):
json_data = xls_to_json_data(filename)
elif filename.suffix.lower() == '.json':
json_data = load_json(filename)
else:
raise ValueError(f'unsupported topology filename extension {filename.suffix.lower()}')
return network_from_json(json_data, equipment)
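Putting the two loaders together (again a sketch; the file names and the module path are assumptions):

```python
from pathlib import Path
from gnpy.tools.json_io import load_equipment, load_network  # assumed module path

equipment = load_equipment(Path('tests/data/eqpt_config.json'))
network = load_network(Path('tests/data/testTopology_expected.json'), equipment)
print(network.number_of_nodes(), network.number_of_edges())
```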
def save_network(network: DiGraph, filename: str):
'''Dump the network into a JSON file
:param network: network to work on
:param filename: file to write to
'''
save_json(network_to_json(network), filename)
def _cls_for(equipment_type):
if equipment_type == 'Edfa':
return elements.Edfa
if equipment_type == 'Fused':
return elements.Fused
elif equipment_type == 'Roadm':
return elements.Roadm
elif equipment_type == 'Transceiver':
return elements.Transceiver
elif equipment_type == 'Fiber':
return elements.Fiber
elif equipment_type == 'RamanFiber':
return elements.RamanFiber
else:
raise ConfigurationError(f'Unknown network equipment "{equipment_type}"')
def network_from_json(json_data, equipment):
# NOTE|dutc: we could use the following, but it would tie our data format
# too closely to the graph library
# from networkx import node_link_graph
g = DiGraph()
for el_config in json_data['elements']:
typ = el_config.pop('type')
variety = el_config.pop('type_variety', 'default')
cls = _cls_for(typ)
if typ == 'Fused':
# well, there's no variety for the 'Fused' node type
pass
elif variety in equipment[typ]:
extra_params = equipment[typ][variety]
temp = el_config.setdefault('params', {})
temp = merge_amplifier_restrictions(temp, extra_params.__dict__)
el_config['params'] = temp
el_config['type_variety'] = variety
elif (typ in ['Fiber', 'RamanFiber']) or (typ == 'Edfa' and variety not in ['default', '']):
raise ConfigurationError(f'The {typ} of variety type {variety} was not recognized:'
'\nplease check it is properly defined in the eqpt_config json file')
el = cls(**el_config)
g.add_node(el)
nodes = {k.uid: k for k in g.nodes()}
for cx in json_data['connections']:
from_node, to_node = cx['from_node'], cx['to_node']
try:
if isinstance(nodes[from_node], elements.Fiber):
edge_length = nodes[from_node].params.length
else:
edge_length = 0.01
g.add_edge(nodes[from_node], nodes[to_node], weight=edge_length)
except KeyError:
raise NetworkTopologyError(f'can not find {from_node} or {to_node} defined in {cx}')
return g
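A minimal hand-written topology in the shape consumed by `network_from_json` might look like the sketch below; it is deliberately degenerate (two transceivers, no fiber in between) and only illustrates the JSON layout, with `equipment` as returned by `load_equipment()`:

```python
json_data = {
    'elements': [
        {'uid': 'trx Site_A', 'type': 'Transceiver'},
        {'uid': 'trx Site_B', 'type': 'Transceiver'},
    ],
    'connections': [
        {'from_node': 'trx Site_A', 'to_node': 'trx Site_B'},
    ],
}
g = network_from_json(json_data, equipment)
```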
def network_to_json(network):
data = {
'elements': [n.to_json for n in network]
}
connections = {
'connections': [{"from_node": n.uid,
"to_node": next_n.uid}
for n in network
for next_n in network.successors(n) if next_n is not None]
}
data.update(connections)
return data
def load_json(filename):
with open(filename, 'r', encoding='utf-8') as f:
data = json.load(f)
return data
def save_json(obj, filename):
with open(filename, 'w', encoding='utf-8') as f:
json.dump(obj, f, indent=2, ensure_ascii=False)
def load_requests(filename, eqpt, bidir, network, network_filename):
""" loads the requests from a json or an excel file into a data string
"""
if filename.suffix.lower() in ('.xls', '.xlsx'):
_logger.info('Automatically converting requests from XLS to JSON')
try:
return convert_service_sheet(filename, eqpt, network, network_filename=network_filename, bidir=bidir)
except ServiceError as this_e:
print(f'{ansi_escapes.red}Service error:{ansi_escapes.reset} {this_e}')
exit(1)
else:
return load_json(filename)
def requests_from_json(json_data, equipment):
"""Extract list of requests from data parsed from JSON"""
requests_list = []
for req in json_data['path-request']:
# init all params from request
params = {}
params['request_id'] = req['request-id']
params['source'] = req['source']
params['bidir'] = req['bidirectional']
params['destination'] = req['destination']
params['trx_type'] = req['path-constraints']['te-bandwidth']['trx_type']
params['trx_mode'] = req['path-constraints']['te-bandwidth']['trx_mode']
params['format'] = params['trx_mode']
params['spacing'] = req['path-constraints']['te-bandwidth']['spacing']
try:
nd_list = req['explicit-route-objects']['route-object-include-exclude']
except KeyError:
nd_list = []
params['nodes_list'] = [n['num-unnum-hop']['node-id'] for n in nd_list]
params['loose_list'] = [n['num-unnum-hop']['hop-type'] for n in nd_list]
# recover trx physical param (baudrate, ...) from type and mode
# in trx_mode_params optical power is read from equipment['SI']['default'] and
# nb_channel is computed based on min max frequency and spacing
trx_params = trx_mode_params(equipment, params['trx_type'], params['trx_mode'], True)
params.update(trx_params)
# print(trx_params['min_spacing'])
# optical power might be set differently in the request. if it is indicated then the
# params['power'] is updated
try:
if req['path-constraints']['te-bandwidth']['output-power']:
params['power'] = req['path-constraints']['te-bandwidth']['output-power']
except KeyError:
pass
# same process for nb-channel
f_min = params['f_min']
f_max_from_si = params['f_max']
try:
if req['path-constraints']['te-bandwidth']['max-nb-of-channel'] is not None:
nch = req['path-constraints']['te-bandwidth']['max-nb-of-channel']
params['nb_channel'] = nch
spacing = params['spacing']
params['f_max'] = automatic_fmax(f_min, spacing, nch)
else:
params['nb_channel'] = automatic_nch(f_min, f_max_from_si, params['spacing'])
except KeyError:
params['nb_channel'] = automatic_nch(f_min, f_max_from_si, params['spacing'])
params['effective_freq_slot'] = req['path-constraints']['te-bandwidth'].get('effective-freq-slot', [None])[0]
try:
params['path_bandwidth'] = req['path-constraints']['te-bandwidth']['path_bandwidth']
except KeyError:
pass
_check_one_request(params, f_max_from_si)
requests_list.append(PathRequest(**params))
return requests_list
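For reference, a single entry of `path-request` in the shape parsed above (the transceiver type and mode are placeholders that must exist in the equipment library):

```python
json_data = {
    'path-request': [{
        'request-id': '0',
        'source': 'trx Site_A',
        'destination': 'trx Site_B',
        'bidirectional': False,
        'path-constraints': {
            'te-bandwidth': {
                'technology': 'flexi-grid',
                'trx_type': 'Voyager',     # placeholder type_variety
                'trx_mode': 'mode 1',      # placeholder mode
                'effective-freq-slot': [{'N': None, 'M': None}],
                'spacing': 50e9,
                'max-nb-of-channel': None,
                'output-power': None,
                'path_bandwidth': 100e9,
            }
        }
    }]
}
requests = requests_from_json(json_data, equipment)
```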
def _check_one_request(params, f_max_from_si):
"""Checks that the requested parameters are consistant (spacing vs nb channel vs transponder mode...)"""
f_min = params['f_min']
f_max = params['f_max']
max_recommanded_nb_channels = automatic_nch(f_min, f_max, params['spacing'])
if params['baud_rate'] is not None:
# implicitly means that a mode is defined with min_spacing
if params['min_spacing'] > params['spacing']:
msg = f'Request {params["request_id"]} has spacing below transponder ' +\
f'{params["trx_type"]} {params["trx_mode"]} min spacing value ' +\
f'{params["min_spacing"]*1e-9}GHz.\nComputation stopped'
print(msg)
_logger.critical(msg)
raise ServiceError(msg)
if f_max > f_max_from_si:
msg = f'''Requested channel number {params["nb_channel"]}, baud rate {params["baud_rate"]*1e-9} GHz
and requested spacing {params["spacing"]*1e-9}GHz is not consistent with frequency range
{f_min*1e-12} THz, {f_max*1e-12} THz, min recommended spacing {params["min_spacing"]*1e-9}GHz.
max recommended nb of channels is {max_recommanded_nb_channels}.'''
_logger.critical(msg)
raise ServiceError(msg)
# Transponder mode already selected; will it fit to the requested bandwidth?
if params['trx_mode'] is not None and params['effective_freq_slot'] is not None \
and params['effective_freq_slot']['M'] is not None:
_, requested_m = compute_spectrum_slot_vs_bandwidth(params['path_bandwidth'],
params['spacing'],
params['bit_rate'])
# params['effective_freq_slot']['M'] value should be bigger than the computed requested_m (simple estimate)
# TODO: elaborate a more accurate estimate with nb_wl * tx_osnr + possibly guardbands in case of
# superchannel closed packing.
if requested_m > params['effective_freq_slot']['M']:
msg = f'requested M {params["effective_freq_slot"]["M"]} number of slots for request ' +\
f'{params["request_id"]} should be greater than {requested_m} to support request ' +\
f'{params["path_bandwidth"] * 1e-9} Gbit/s with {params["trx_type"]} {params["trx_mode"]}'
_logger.critical(msg)
raise ServiceError(msg)
def disjunctions_from_json(json_data):
""" reads the disjunction requests from the json dict and create the list
of requested disjunctions for this set of requests
"""
disjunctions_list = []
if 'synchronization' in json_data:
for snc in json_data['synchronization']:
params = {}
params['disjunction_id'] = snc['synchronization-id']
params['relaxable'] = snc['svec']['relaxable']
params['link_diverse'] = 'link' in snc['svec']['disjointness']
params['node_diverse'] = 'node' in snc['svec']['disjointness']
params['disjunctions_req'] = snc['svec']['request-id-number']
disjunctions_list.append(Disjunction(**params))
return disjunctions_list
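The corresponding `synchronization` block mirrors what `Request_element.pathsync` (further down in this diff) produces, e.g.:

```python
json_data = {
    'synchronization': [{
        'synchronization-id': '3',
        'svec': {
            'relaxable': 'false',
            'disjointness': 'node link',
            'request-id-number': ['3', '1'],
        },
    }]
}
disjunctions = disjunctions_from_json(json_data)   # one Disjunction, both link- and node-diverse
```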
def convert_service_sheet(
input_filename,
eqpt,
network,
network_filename=None,
output_filename='',
bidir=False):
if output_filename == '':
output_filename = f'{str(input_filename)[0:len(str(input_filename))-len(str(input_filename.suffixes[0]))]}_services.json'
data = read_service_sheet(input_filename, eqpt, network, network_filename, bidir)
save_json(data, output_filename)
return data


@@ -1,70 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
'''
gnpy.tools.plots
================
Graphs and plots usable from a CLI application
'''
from matplotlib.pyplot import show, axis, figure, title, text
from networkx import draw_networkx
from gnpy.core.elements import Transceiver
def _try_city(node):
return node.location.city if node.location.city else node.uid
def plot_baseline(network):
pos = {n: (n.lng, n.lat) for n in network.nodes()}
labels = {n: _try_city(n) for n in network.nodes() if isinstance(n, Transceiver)}
draw_networkx(network, pos=pos, node_size=50, node_color='#ababab', edge_color='#ababab',
labels=labels, font_size=14)
axis('off')
show()
def plot_results(network, path, source, destination):
path_edges = set(zip(path[:-1], path[1:]))
edges = set(network.edges()) - path_edges
nodes = [n for n in network.nodes() if n not in path]
pos = {n: (n.lng, n.lat) for n in network.nodes()}
nodes_by_pos = {}
for k, (x, y) in pos.items():
nodes_by_pos.setdefault((round(x, 1), round(y, 1)), []).append(k)
labels = {n: _try_city(n) for n in network.nodes() if isinstance(n, Transceiver)}
fig = figure()
draw_networkx(network, pos=pos, labels=labels, font_size=14,
nodelist=nodes, node_color='#ababab', node_size=50,
edgelist=edges, edge_color='#ababab')
draw_networkx(network, pos=pos, with_labels=False,
nodelist=path, node_color='#ff0000', node_size=55,
edgelist=path_edges, edge_color='#ff0000')
title(f'Propagating from {_try_city(source)} to {_try_city(destination)}')
axis('off')
heading = 'Spectral Information\n\n'
textbox = text(0.85, 0.20, heading, fontsize=14, fontname='Ubuntu Mono',
verticalalignment='top', transform=fig.axes[0].transAxes,
bbox={'boxstyle': 'round', 'facecolor': 'wheat', 'alpha': 0.5})
msgs = {(x, y): heading + '\n\n'.join(str(n) for n in ns if n in path)
for (x, y), ns in nodes_by_pos.items()}
def hover(event):
if event.xdata is None or event.ydata is None:
return
if fig.contains(event):
x, y = round(event.xdata, 1), round(event.ydata, 1)
if (x, y) in msgs:
textbox.set_text(msgs[x, y])
else:
textbox.set_text(heading)
fig.canvas.draw_idle()
fig.canvas.mpl_connect('motion_notify_event', hover)
show()
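Typical use of these two helpers (a sketch: `network` is the loaded topology and `path` a list of propagated elements obtained elsewhere in gnpy):

```python
from gnpy.tools.plots import plot_baseline, plot_results  # module path per the docstring above

plot_baseline(network)                            # whole topology, transceivers labelled
plot_results(network, path, path[0], path[-1])    # same view with the computed path highlighted
```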


@@ -1,378 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
gnpy.tools.service_sheet
========================
XLS parser that can be called to create a JSON request file in accordance with
Yang model for requesting path computation.
See: draft-ietf-teas-yang-path-computation-01.txt
"""
from xlrd import open_workbook, XL_CELL_EMPTY
from collections import namedtuple
from logging import getLogger
from copy import deepcopy
from gnpy.core.utils import db2lin
from gnpy.core.exceptions import ServiceError
from gnpy.core.elements import Transceiver, Roadm, Edfa, Fiber
import gnpy.core.ansi_escapes as ansi_escapes
from gnpy.tools.convert import corresp_names, corresp_next_node
SERVICES_COLUMN = 12
def all_rows(sheet, start=0):
return (sheet.row(x) for x in range(start, sheet.nrows))
logger = getLogger(__name__)
class Request(namedtuple('Request', 'request_id source destination trx_type mode \
spacing power nb_channel disjoint_from nodes_list is_loose path_bandwidth')):
def __new__(cls, request_id, source, destination, trx_type, mode=None, spacing=None, power=None, nb_channel=None, disjoint_from='', nodes_list=None, is_loose='', path_bandwidth=None):
return super().__new__(cls, request_id, source, destination, trx_type, mode, spacing, power, nb_channel, disjoint_from, nodes_list, is_loose, path_bandwidth)
class Element:
def __eq__(self, other):
return type(self) == type(other) and self.uid == other.uid
def __hash__(self):
return hash((type(self), self.uid))
class Request_element(Element):
def __init__(self, Request, equipment, bidir):
# request_id is str
# excel has automatic number formatting that adds .0 on integer values
# the next lines recover the pure int value, assuming this .0 is unwanted
self.request_id = correct_xlrd_int_to_str_reading(Request.request_id)
self.source = f'trx {Request.source}'
self.destination = f'trx {Request.destination}'
# TODO: the automatic naming generated by excel parser requires that source and dest name
# be a string starting with 'trx' : this is manually added here.
self.srctpid = f'trx {Request.source}'
self.dsttpid = f'trx {Request.destination}'
self.bidir = bidir
# test that trx_type belongs to eqpt_config.json
# if not replace it with a default
try:
if equipment['Transceiver'][Request.trx_type]:
self.trx_type = correct_xlrd_int_to_str_reading(Request.trx_type)
if Request.mode is not None:
Requestmode = correct_xlrd_int_to_str_reading(Request.mode)
if [mode for mode in equipment['Transceiver'][Request.trx_type].mode if mode['format'] == Requestmode]:
self.mode = Requestmode
else:
msg = f'Request Id: {self.request_id} - could not find tsp : \'{Request.trx_type}\' with mode: \'{Requestmode}\' in eqpt library \nComputation stopped.'
# print(msg)
logger.critical(msg)
raise ServiceError(msg)
else:
Requestmode = None
self.mode = Request.mode
except KeyError:
msg = f'Request Id: {self.request_id} - could not find tsp : \'{Request.trx_type}\' with mode: \'{Request.mode}\' in eqpt library \nComputation stopped.'
# print(msg)
logger.critical(msg)
raise ServiceError(msg)
# excel input are in GHz and dBm
if Request.spacing is not None:
self.spacing = Request.spacing * 1e9
else:
msg = f'Request {self.request_id} missing spacing: spacing is mandatory.\ncomputation stopped'
logger.critical(msg)
raise ServiceError(msg)
if Request.power is not None:
self.power = db2lin(Request.power) * 1e-3
else:
self.power = None
if Request.nb_channel is not None:
self.nb_channel = int(Request.nb_channel)
else:
self.nb_channel = None
value = correct_xlrd_int_to_str_reading(Request.disjoint_from)
self.disjoint_from = [n for n in value.split(' | ') if value]
self.nodes_list = []
if Request.nodes_list:
self.nodes_list = Request.nodes_list.split(' | ')
self.loose = 'LOOSE'
if Request.is_loose.lower() == 'no':
self.loose = 'STRICT'
self.path_bandwidth = None
if Request.path_bandwidth is not None:
self.path_bandwidth = Request.path_bandwidth * 1e9
else:
self.path_bandwidth = 0
uid = property(lambda self: repr(self))
@property
def pathrequest(self):
# Default assumption for bidir is False
req_dictionnary = {
'request-id': self.request_id,
'source': self.source,
'destination': self.destination,
'src-tp-id': self.srctpid,
'dst-tp-id': self.dsttpid,
'bidirectional': self.bidir,
'path-constraints': {
'te-bandwidth': {
'technology': 'flexi-grid',
'trx_type': self.trx_type,
'trx_mode': self.mode,
'effective-freq-slot': [{'N': None, 'M': None}],
'spacing': self.spacing,
'max-nb-of-channel': self.nb_channel,
'output-power': self.power
}
}
}
if self.nodes_list:
req_dictionnary['explicit-route-objects'] = {}
temp = {'route-object-include-exclude': [
{'explicit-route-usage': 'route-include-ero',
'index': self.nodes_list.index(node),
'num-unnum-hop': {
'node-id': f'{node}',
'link-tp-id': 'link-tp-id is not used',
'hop-type': f'{self.loose}',
}
}
for node in self.nodes_list]
}
req_dictionnary['explicit-route-objects'] = temp
if self.path_bandwidth is not None:
req_dictionnary['path-constraints']['te-bandwidth']['path_bandwidth'] = self.path_bandwidth
return req_dictionnary
@property
def pathsync(self):
if self.disjoint_from:
return {'synchronization-id': self.request_id,
'svec': {
'relaxable': 'false',
'disjointness': 'node link',
'request-id-number': [self.request_id] + [n for n in self.disjoint_from]
}
}
else:
return None
# TO-DO: avoid multiple entries with same synchronisation vectors
@property
def json(self):
return self.pathrequest, self.pathsync
def read_service_sheet(
input_filename,
eqpt,
network,
network_filename=None,
bidir=False):
""" converts a service sheet into a json structure
"""
if network_filename is None:
network_filename = input_filename
service = parse_excel(input_filename)
req = [Request_element(n, eqpt, bidir) for n in service]
req = correct_xls_route_list(network_filename, network, req)
# if there is no sync vector , do not write any synchronization
synchro = [n.json[1] for n in req if n.json[1] is not None]
if synchro:
data = {
'path-request': [n.json[0] for n in req],
'synchronization': synchro
}
else:
data = {
'path-request': [n.json[0] for n in req]
}
return data
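As a usage sketch (the XLS file name is an assumption; `equipment` and `network` as loaded earlier):

```python
from pathlib import Path
from gnpy.tools.service_sheet import read_service_sheet  # module path per the docstring above

data = read_service_sheet(Path('meshTopologyExampleV2.xls'), equipment, network)
print(len(data['path-request']))   # one entry per row of the Service sheet
```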
def correct_xlrd_int_to_str_reading(v):
if not isinstance(v, str):
value = str(int(v))
if value.endswith('.0'):
value = value[:-2]
else:
value = v
return value
def parse_row(row, fieldnames):
return {f: r.value for f, r in zip(fieldnames, row[0:SERVICES_COLUMN])
if r.ctype != XL_CELL_EMPTY}
def parse_excel(input_filename):
with open_workbook(input_filename) as wb:
service_sheet = wb.sheet_by_name('Service')
services = list(parse_service_sheet(service_sheet))
return services
def parse_service_sheet(service_sheet):
""" reads each column according to authorized fieldnames. order is not important.
"""
logger.info(f'Validating headers on {service_sheet.name!r}')
# add a test on the field to handle the '' field case that arises when columns on the
# right hand side are used as comments or drawings in the excel sheet
header = [x.value.strip() for x in service_sheet.row(4)[0:SERVICES_COLUMN]
if len(x.value.strip()) > 0]
# create a service_fieldname independent from the excel column order
# to be compatible with any version of the sheet
# the following dictionary records the excel field names and the corresponding parameter names
authorized_fieldnames = {
'route id': 'request_id', 'Source': 'source', 'Destination': 'destination',
'TRX type': 'trx_type', 'Mode': 'mode', 'System: spacing': 'spacing',
'System: input power (dBm)': 'power', 'System: nb of channels': 'nb_channel',
'routing: disjoint from': 'disjoint_from', 'routing: path': 'nodes_list',
'routing: is loose?': 'is_loose', 'path bandwidth': 'path_bandwidth'}
try:
service_fieldnames = [authorized_fieldnames[e] for e in header]
except KeyError:
msg = f'Malformed header on Service sheet: {header} field not in {authorized_fieldnames}'
logger.critical(msg)
raise ValueError(msg)
for row in all_rows(service_sheet, start=5):
yield Request(**parse_row(row[0:SERVICES_COLUMN], service_fieldnames))
def correct_xls_route_list(network_filename, network, pathreqlist):
""" prepares the format of route list of nodes to be consistant with nodes names:
remove wrong names, find correct names for ila, roadm and fused if the entry was
xls.
if it was not xls, all names in list should be exact name in the network.
"""
# first load the base correspondence dicts built with the Excel naming
corresp_roadm, corresp_fused, corresp_ila = corresp_names(network_filename, network)
# then correct the dict names with the auto-design names and find the next_node name
# according to the XLS naming
corresp_ila, next_node = corresp_next_node(network, corresp_ila, corresp_roadm)
# finally correct the constraints based on these dicts
trxfibertype = [n.uid for n in network.nodes() if isinstance(n, (Transceiver, Fiber))]
roadmtype = [n.uid for n in network.nodes() if isinstance(n, Roadm)]
edfatype = [n.uid for n in network.nodes() if isinstance(n, Edfa)]
# TODO there is a problem of identification of fibers in case of parallel
# fibers between two adjacent roadms so fiber constraint is not supported
transponders = [n.uid for n in network.nodes() if isinstance(n, Transceiver)]
for pathreq in pathreqlist:
# first check that source and dest are transceivers
if pathreq.source not in transponders:
msg = f'{ansi_escapes.red}Request: {pathreq.request_id}: could not find' +\
f' transponder source : {pathreq.source}.{ansi_escapes.reset}'
logger.critical(msg)
raise ServiceError(msg)
if pathreq.destination not in transponders:
msg = f'{ansi_escapes.red}Request: {pathreq.request_id}: could not find' +\
f' transponder destination: {pathreq.destination}.{ansi_escapes.reset}'
logger.critical(msg)
raise ServiceError(msg)
# silently pop source and dest nodes from the list if they were added by the user as first
# and last elem in the constraints respectively. Other positions must lead to an error
# caught later on
if pathreq.nodes_list and pathreq.source == pathreq.nodes_list[0]:
pathreq.loose_list.pop(0)
pathreq.nodes_list.pop(0)
if pathreq.nodes_list and pathreq.destination == pathreq.nodes_list[-1]:
pathreq.loose_list.pop(-1)
pathreq.nodes_list.pop(-1)
# Then process user defined constraints with respect to automatic namings
temp = deepcopy(pathreq)
# This needs a temporary object since we may suppress/correct elements in the list
# during the process
for i, n_id in enumerate(temp.nodes_list):
# n_id must not be a transceiver and must not be a fiber (non supported, user
# can not enter fiber names in excel)
if n_id not in trxfibertype:
# check that n_id is in the node list, if not find a correspondance name
if n_id in roadmtype + edfatype:
nodes_suggestion = [n_id]
else:
# check roadm, fused, and ila first, in this order, because ila automatic names
# contain roadm names. If it is a fused node, the next ila names might be correct
# suggestions, especially if the following fibers were split and the ila names
# were created with the name of the fused node
if n_id in corresp_roadm.keys():
nodes_suggestion = corresp_roadm[n_id]
elif n_id in corresp_fused.keys():
nodes_suggestion = corresp_fused[n_id] + corresp_ila[n_id]
elif n_id in corresp_ila.keys():
nodes_suggestion = corresp_ila[n_id]
else:
nodes_suggestion = []
if nodes_suggestion:
try:
if len(nodes_suggestion) > 1:
# if there is more than one suggestion, we need to choose the direction
# we rely on the next node provided by the user for this purpose
new_n = next(n for n in nodes_suggestion
if n in next_node.keys() and next_node[n]
in temp.nodes_list[i:] + [pathreq.destination] and
next_node[n] not in temp.nodes_list[:i])
else:
new_n = nodes_suggestion[0]
if new_n != n_id:
# warns the user when the correct name is used only in verbose mode,
# eg 'a' is a roadm and correct name is 'roadm a' or when there was
# too much ambiguity, 'b' is an ila, its name can be:
# Edfa0_fiber (a → b)-xx if next node is c or
# Edfa0_fiber (c → b)-xx if next node is a
msg = f'{ansi_escapes.yellow}Invalid route node specified:' +\
f'\n\t\'{n_id}\', replaced with \'{new_n}\'{ansi_escapes.reset}'
logger.info(msg)
pathreq.nodes_list[pathreq.nodes_list.index(n_id)] = new_n
except StopIteration:
# should not reach this case, unless the requested direction does not exist
msg = f'{ansi_escapes.yellow}Invalid route specified {n_id}: could' +\
f' not decide on direction, skipped!\nPlease add a valid' +\
f' direction in constraints (next neighbour node){ansi_escapes.reset}'
print(msg)
logger.info(msg)
pathreq.loose_list.pop(pathreq.nodes_list.index(n_id))
pathreq.nodes_list.remove(n_id)
else:
if temp.loose_list[i] == 'LOOSE':
# if no matching can be found in the network just ignore this constraint
# if it is a loose constraint
# warns the user that this node is not part of the topology
msg = f'{ansi_escapes.yellow}Invalid node specified:\n\t\'{n_id}\'' +\
f', could not use it as constraint, skipped!{ansi_escapes.reset}'
print(msg)
logger.info(msg)
pathreq.loose_list.pop(pathreq.nodes_list.index(n_id))
pathreq.nodes_list.remove(n_id)
else:
msg = f'{ansi_escapes.red}Could not find node:\n\t\'{n_id}\' in network' +\
f' topology. Strict constraint can not be applied.{ansi_escapes.reset}'
logger.critical(msg)
raise ServiceError(msg)
else:
if temp.loose_list[i] == 'LOOSE':
print(f'{ansi_escapes.yellow}Invalid route node specified:\n\t\'{n_id}\'' +
f' type is not supported as constraint with xls network input,' +
f' skipped!{ansi_escapes.reset}')
pathreq.loose_list.pop(pathreq.nodes_list.index(n_id))
pathreq.nodes_list.remove(n_id)
else:
msg = f'{ansi_escapes.red}Invalid route node specified \n\t\'{n_id}\'' +\
f' type is not supported as constraint with xls network input.' +\
f' Strict constraint can not be applied.{ansi_escapes.reset}'
logger.critical(msg)
raise ServiceError(msg)
return pathreqlist


@@ -1,3 +0,0 @@
'''
Tracking :py:mod:`.request` for spectrum and their :py:mod:`.spectrum_assignment`.
'''
