
Data

Books

Websites

Podcasts

Videos

Articles

On ethics

On color

On interactivity

Tools

A variety of useful toolkits have been designed to help support information visualization applications. Some include support for the full visualization pipeline from data to interactive graphics, while others focus only on a subset, typically graphics and interaction.

Visualization Cheatsheets

Visualization Toolkits

  • D3 JavaScript library for data-driven DOM manipulation, interaction and animation. Includes utilities for visualization techniques and SVG generation.
  • Vega Declarative language for representing visualizations. Vega will parse a visualization specification to produce a JavaScript-based visualization, using either HTML Canvas or SVG rendering. Vega is particularly useful for creating programs that produce visualizations as output.
  • Vega-Lite High-level visualization grammar that compiles concise specifications to full Vega specifications.
  • Processing or p5.js Java-like graphics and interaction language and IDE. Processing has a strong user community with many examples. p5.js is a sister project for JavaScript.
  • Leaflet Open-source mapping library
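
To give a flavor of the declarative approach, here is a minimal Vega-Lite bar-chart specification (the data values are made up for illustration). Vega-Lite compiles this short spec into a full Vega specification and renders it with Canvas or SVG:

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "A simple bar chart with illustrative data",
  "data": {
    "values": [
      {"category": "A", "count": 28},
      {"category": "B", "count": 55},
      {"category": "C", "count": 43}
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "y": {"field": "count", "type": "quantitative"}
  }
}
```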

Visualization Tools

  • Tableau for Students Free version of Tableau for students
  • Tableau Public Free version of Tableau for publishing on the web
  • Voyager and Polestar Web-based data exploration tools from UW's Interactive Data Lab
  • Lyra Interactive visualization design environment
  • ggplot2 Grammar-of-graphics plotting package for R
  • GGobi Classic system for visualizations of multivariate data

Visualization Programming Environments

Network Analysis Tools

  • Gephi Graph analysis application for Windows, Mac, and Linux
  • SNAP Graph analysis library for C++ and Python

Color Tools

  • Chroma.js JavaScript library for dealing with colors
  • D3.js JavaScript library with modules for dealing with colors
  • HCL Wizard Tool for viewing, manipulating, and choosing HCL color palettes
  • I Want Hue Tool for generating and refining palettes of optimally distinct colors
  • Colorbrewer Tool for finding sequential, diverging, and qualitative color palettes
  • Color Picker for Data Tool for picking color palettes
  • Accessible Color Matrix Tool for building accessible color palettes
  • Contrast Finder Tool for finding good contrasts between two colors
  • Chromaticity Guidance for accessible visualization color design
  • Color Oracle Free color blindness simulator for Windows, Mac, and Linux

Data Literacy Resources

Slides

Day 1: https://docs.google.com/presentation/d/e/2PACX-1vSq7ytkZLPS-Nbs5JYbeKG8EuYotCxRgUewmbgWWWCqGIV-KUX4AXCa-_5bbdYNilRd46n6p3F9IbmT/pub?start=false&loop=false&delayms=3000

Day 2: https://docs.google.com/presentation/d/e/2PACX-1vQlmDzzilVBUXY9wKUZxfWvaQTAqa2nmmydKBiBsYszybYCSC6t_cUAA-RG_n4032oEj1KudNCIHuZx/pub?start=false&loop=false&delayms=3000

General / good reads

Data quality

  • The Quartz Guide To Bad Data
  • Tidy Data – although the principle of tidy data comes from an R developer and the examples in this document are written in R, "tidy data" is a very valuable standard to aim for whenever you work with data. Once your data is "tidy", visualization in R (or in any other language or framework) becomes much easier. You can also look at the more formal and less R-heavy scientific paper.
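
To make the idea concrete, here is a small Python sketch (with made-up numbers) that turns a "wide" table, one column per year, into tidy long form, one row per observation:

```python
# Untidy: one row per country, one column per year (hypothetical data)
untidy = [
    {"country": "CH", "2017": 10, "2018": 12},
    {"country": "DE", "2017": 20, "2018": 24},
]

# Tidy: one row per observation, one column per variable
tidy = [
    {"country": row["country"], "year": year, "value": row[year]}
    for row in untidy
    for year in ("2017", "2018")
]

for obs in tidy:
    print(obs)
```

Once the data is in this shape, grouping, filtering, and plotting work the same way regardless of how many years the table covers.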

Data formats / conversions

Regex

Geospatial data

Working with the CLI

The command line interface (CLI, often referred to as the "shell" or "terminal") is a very powerful tool, and every operating system has one. Working with the CLI is easiest on Linux and Mac, which behave similarly because both are Unix-based. I really recommend learning the basics of the terminal, e.g. through this tutorial. Here are some more tips for working with data on the CLI. This Twitter account gives useful and sometimes funny tips on how to make the most of the terminal.

Data processing in R

Exercises

Exercise solutions day 1

  1. Manillio:
    1. Browse https://developer.spotify.com/console/get-search-item/ to get his ID: 7uxtLjuqkJ3cnjQQuW6Cul
    2. Browse https://developer.spotify.com/console/get-artist-top-tracks/, fill in the values and get the JSON data
    3. Copy and paste the JSON into https://json-csv.com/
    4. Download as Excel and do the math (36.60705 min)
    5. Take-Aways:
      1. Sometimes, data is accessible via an API
      2. The preferred data format of APIs is JSON
      3. JSON can be converted into CSV
      4. The preferred way of talking to an API is with code
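
The steps above can also be scripted end to end. The sketch below mimics the JSON-to-CSV conversion and the duration math in Python; the JSON is a made-up excerpt in the style of Spotify's top-tracks response (durations in milliseconds), not the real API output:

```python
import csv
import io
import json

# Hypothetical excerpt of a top-tracks API response
# (Spotify-style: durations are given in milliseconds)
raw = json.loads("""
{
  "tracks": [
    {"name": "Song A", "duration_ms": 180000},
    {"name": "Song B", "duration_ms": 210000}
  ]
}
""")

# Flatten the JSON into CSV rows - what https://json-csv.com/ does for you
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "duration_ms"])
writer.writeheader()
writer.writerows(raw["tracks"])
print(out.getvalue())

# "Do the math": total playing time in minutes
total_min = sum(t["duration_ms"] for t in raw["tracks"]) / 60000
print(total_min)
```

With the real API you would fetch the JSON with an authenticated request instead of pasting it in, but the conversion and the arithmetic stay the same.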
  2. Wasserstation Tiefenbrunnen
    1. Browse https://www.tecson-data.ch/zurich/tiefenbrunnen/index.php (probably the top Google result)
    2. Select “windchill”, 2.11.2018 to 7.11.2018 and “all values” at the very bottom
    3. Copy the values into Excel by hand and calculate the median
    4. Alternatively: browse https://tecdottir.herokuapp.com/docs/#/measurements
    5. Enter the parameters
    6. Copy the curl command and pipe its output into a file
    7. Paste the JSON into https://json-csv.com/ (bonus: use matrix style)
    8. Download the CSV, open it in Excel and calculate the median (don’t forget to filter out unneeded dates)
    9. Take-Aways:
      1. Copying and pasting stuff from HTML tables should be avoided
      2. Always look out for an API
      3. Try out different settings of your tools - they might bring you better results (“matrix style”)
      4. Get to know the terminal
      5. Excel / LibreOffice / OpenOffice have some good filters: get to know how to use them
      6. If you run out of free queries, delete your cookies
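
Once the measurements come from the API as JSON, the date filtering and the median can also be done in a few lines of code instead of Excel. The field names below are illustrative, not the exact schema of the tecdottir API:

```python
import json
from statistics import median

# Hypothetical excerpt of the Tiefenbrunnen measurements JSON
# (field names are illustrative, not the exact API schema)
raw = json.loads("""
[
  {"timestamp": "2018-11-02T08:00", "windchill": 7.1},
  {"timestamp": "2018-11-05T08:00", "windchill": 5.4},
  {"timestamp": "2018-11-07T08:00", "windchill": 6.0},
  {"timestamp": "2018-11-09T08:00", "windchill": 3.2}
]
""")

# Filter out unneeded dates, then take the median - the same steps
# as the spreadsheet workflow, but reproducible
wanted = [
    m["windchill"] for m in raw
    if "2018-11-02" <= m["timestamp"][:10] <= "2018-11-07"
]
print(median(wanted))
```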
  3. Schlichtungsverfahren
    1. Google it and go to https://www.bwo.admin.ch/bwo/de/home/mietrecht/schlichtungsbehoerden/statistik-der-schlichtungsverfahren.html
    2. Download first PDF
    3. Download and launch Tabula, then upload the PDF (or use Adobe Acrobat Reader DC)
    4. Select the last table and use the lattice extraction method
    5. Download as CSV
    6. Open in LibreOffice and make chart
    7. Take-Aways:
      1. A lot of interesting data is buried in PDFs
      2. Use proprietary software or Tabula to extract the data