UNECE/CEFACT package codes

Files: 2
Size: 47kB
Formats: csv, zip
Created: 6 years ago
Updated: 6 years ago
License: ODC-PDDL-1.0
Source: UNECE

Coded representations of the package type names used in International Trade (UNECE/CEFACT Trade Facilitation Recommendation No. 21). The source of the information is the UNECE website (see the Read me section below).

Data Files

Download files in this dataset

File: data
Size: 18kB
Formats: csv (18kB), json (33kB)

File: unece-package-codes_zip
Description: Compressed version of the dataset. Includes normalized CSV and JSON data along with the original data and datapackage.json.
Size: 20kB
Formats: zip (20kB)



Field information

Field Name   Order  Type (Format)  Description
Code         1      string         A 2-character alphanumeric code value agreed by the UN/CEFACT content management group
Name         2      string
Description  3      string
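
As a quick sanity check of the schema above, here is a minimal Python sketch (assuming pandas is installed, and using the direct CSV URL that also appears in the curl examples below) that loads the data and verifies the 2-character Code field:

import pandas as pd

# Direct CSV URL for this dataset's tabular resource on DataHub
CSV_URL = "https://datahub.io/core/unece-package-codes/r/0.csv"

df = pd.read_csv(CSV_URL)

# The schema says Code is a 2-character alphanumeric value
assert (df["Code"].str.len() == 2).all(), "expected 2-character codes"

print(df.head())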

Integrate this dataset into your favourite tool

Use our data-cli tool designed for data wranglers:

data get https://datahub.io/core/unece-package-codes
data info core/unece-package-codes
tree core/unece-package-codes

If you prefer curl:

# Get a list of the dataset's resources
curl -L -s https://datahub.io/core/unece-package-codes/datapackage.json | grep path

# Get the resources themselves
curl -L https://datahub.io/core/unece-package-codes/r/0.csv
curl -L https://datahub.io/core/unece-package-codes/r/1.zip

If you are using R, here's how to quickly load the data you want:

install.packages("jsonlite", repos="https://cran.rstudio.com/")
library("jsonlite")

json_file <- 'https://datahub.io/core/unece-package-codes/datapackage.json'
json_data <- fromJSON(paste(readLines(json_file), collapse=""))

# get list of all resources:
print(json_data$resources$name)

# print all tabular data (if any exists)
for(i in 1:length(json_data$resources$datahub$type)){
  if(json_data$resources$datahub$type[i]=='derived/csv'){
    path_to_file = json_data$resources$path[i]
    data <- read.csv(url(path_to_file))
    print(data)
  }
}

Note: You might need to run the script with root permissions if you are installing packages system-wide on a Linux machine.

Install the Frictionless Data datapackage library and pandas itself:

pip install datapackage
pip install pandas

Now you can use the Data Package with pandas:

import datapackage
import pandas as pd

data_url = 'https://datahub.io/core/unece-package-codes/datapackage.json'

# to load Data Package into storage
package = datapackage.Package(data_url)

# to load only tabular data
resources = package.resources
for resource in resources:
    if resource.tabular:
        data = pd.read_csv(resource.descriptor['path'])
        print(data)

For Python, first install the `datapackage` library (all the datasets on DataHub are Data Packages):

pip install datapackage

To load the Data Package into your Python environment, run the following code:

from datapackage import Package

package = Package('https://datahub.io/core/unece-package-codes/datapackage.json')

# print list of all resources:
print(package.resource_names)

# print processed tabular data (if any exists)
for resource in package.resources:
    if resource.descriptor['datahub']['type'] == 'derived/csv':
        print(resource.read())

If you are using JavaScript, follow the instructions below:

Install the data.js module using npm:

  $ npm install data.js

Once the package is installed, use the following code snippet:

const {Dataset} = require('data.js')

const path = 'https://datahub.io/core/unece-package-codes/datapackage.json'

// We're using self-invoking function here as we want to use async-await syntax:
;(async () => {
  const dataset = await Dataset.load(path)
  // get list of all resources:
  for (const id in dataset.resources) {
    console.log(dataset.resources[id]._descriptor.name)
  }
  // get all tabular data (if any exists)
  for (const id in dataset.resources) {
    if (dataset.resources[id]._descriptor.format === "csv") {
      const file = dataset.resources[id]
      // Get a raw stream
      const stream = await file.stream()
      // alternatively, load the entire file as a buffer (be careful with large files!)
      // const buffer = await file.buffer
      // print data
      stream.pipe(process.stdout)
    }
  }
})()

Read me

Coded representations of the package type names used in International Trade (UNECE/CEFACT Trade Facilitation Recommendation No.21)

Data

The source of the information is the UNECE website: http://www.unece.org/tradewelcome/areas-of-work/un-centre-for-trade-facilitation-and-e-business-uncefact/outputs/cefactrecommendationsrec-index/list-of-trade-facilitation-recommendations-n-21-to-24.html

All data from UNECE is available in an easily distributable format, in this case an .xls file. To process it, I simply removed any lines with a status of 'X' and dropped the numeric code column, as it is of little usable value; a sketch of this processing step follows the status list below.

Meaning of status codes:

- A plus sign (+): Added. New unit added in this release of the code list.
- A hash sign (#): Changed name. Changes to the unit name in this release of the code list.
- A vertical bar (¦): Changed characteristic(s). Changes other than to the unit name in this release of the code list, e.g. a change to the numeric code.
- A letter X (X): Marked as deleted. Code entries marked as deleted will be retained indefinitely in the code lists. When appropriate, these entries may subsequently be reinstated via the maintenance process.
- An equals sign (=): Reinstated. Code entries previously marked as deleted and reinstated in this release of the code list.
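
For illustration, here is a minimal Python sketch of that processing step. It assumes pandas (with xlrd for .xls support) and hypothetical file and column names (rec21.xls, Status, Numeric code); the actual UNECE spreadsheet layout may differ:

import pandas as pd

# Hypothetical input file and column names; the real UNECE spreadsheet may differ.
raw = pd.read_excel("rec21.xls")

# Drop entries marked as deleted (status 'X')...
kept = raw[raw["Status"] != "X"]

# ...and drop the numeric code column, keeping Code, Name and Description.
kept = kept.drop(columns=["Numeric code"])

kept.to_csv("data.csv", index=False)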

Requests for addition to the codes should be made to the Information Content Management Group (ICG) at [email protected]

License

This data is made available under the Public Domain Dedication and License (PDDL) version 1.0, whose full text can be found at http://opendatacommons.org/licenses/pddl/

