by Kevin Rosamont

BIKE SERVICES API + SHINY = NICE APP

Using a bike services API and Shiny (a powerful web framework for building web applications in R), we can build a wonderful real-time web app.

Read more → Hi everyone! In this blog post I will be brief and introduce our Shiny application for bike self-service stations. The code comes in two parts: the ui.R file for the interface and the server.R file for the server logic.
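To illustrate the ui.R/server.R split mentioned above, here is a minimal single-file sketch of such an app. This is not the authors' actual code: the station data is hard-coded for illustration, whereas the real app fetches it from a bike services API, and the use of leaflet for the map is an assumption.

```r
# Minimal sketch of a Shiny bike-station app (illustrative only).
library(shiny)
library(leaflet)  # interactive maps, a common choice for station apps

ui <- fluidPage(
  titlePanel("Bike self-service stations"),
  leafletOutput("map")
)

server <- function(input, output, session) {
  # Hypothetical sample data standing in for an API response
  stations <- data.frame(
    name  = c("Station A", "Station B"),
    lat   = c(49.611, 49.600),
    lng   = c(6.132, 6.125),
    bikes = c(5, 12)
  )
  output$map <- renderLeaflet({
    leaflet(stations) %>%
      addTiles() %>%
      addMarkers(~lng, ~lat, popup = ~paste0(name, ": ", bikes, " bikes"))
  })
}

shinyApp(ui = ui, server = server)
```

In a real-time app, the `stations` data frame would be replaced by a reactive expression that polls the API at a fixed interval.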

by Bruno Rodrigues (guest)

Teaching Luxembourgish to my computer

This post tells the story of how we developed Liss, an AI for sentiment analysis in Luxembourgish, trained on more than 50,000 movie reviews.

Read more → How we taught a computer to understand Luxembourgish Today we reveal a project that Kevin and I have been working on for the past two months: Liss. Liss is a sentiment analysis artificial intelligence; you can let Liss read single words or whole sentences, and Liss will tell you whether the overall sentiment is positive or negative.

by Bruno Rodrigues (guest)

Analysis of the Renert - Part 3: Visualizations

In this series of blog posts, I show how you can scrape text from the internet and use it to perform a tidy text analysis. I analyze a Luxembourgish fable called Renert.

Read more → This is part 3 of a 3-part blog post. This post uses the data that was scraped in part 1 and prepared in part 2. Now that we have the data in a nice format, let's make a frequency plot! First, let's load the data and the packages: library("tidyverse") library("ggthemes") # To use different themes and colors renert_tokenized <- readRDS("renert_tokenized.rds") Using the ggplot2 package, I can produce a plot of the most frequent words.
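The kind of frequency plot described in the preview can be sketched as follows. The tiny data frame here is a hypothetical stand-in for renert_tokenized.rds, and the theme choice is an assumption; the real post works on the full tokenized fable.

```r
library(tidyverse)
library(ggthemes)

# Hypothetical tokenized data, one word per row,
# standing in for renert_tokenized.rds
renert_tokenized <- tibble(
  word = c("fuuss", "renert", "fuuss", "wollef", "fuuss", "wollef")
)

renert_tokenized %>%
  count(word, sort = TRUE) %>%   # word frequencies, most frequent first
  top_n(10, n) %>%               # keep the ten most frequent words
  ggplot(aes(x = reorder(word, n), y = n)) +
  geom_col() +
  coord_flip() +                 # horizontal bars, easier to read labels
  theme_fivethirtyeight() +      # a theme from ggthemes, as loaded above
  labs(x = "Word", y = "Frequency")
```

`count()` plus `reorder()` is the usual tidyverse idiom for sorted bar charts of word frequencies.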

by Bruno Rodrigues (guest)

Analysis of the Renert - Part 2: Data Processing

In this series of blog posts, I show how you can scrape text from the internet and use it to perform a tidy text analysis. I analyze a Luxembourgish fable called Renert.

Read more → This is part 2 of a 3-part blog post. This post takes the data we scraped in part 1 and prepares it for further analysis; it is quite technical. If you're only interested in the results of the analysis, skip to part 3! First, let's load the data that we prepared in part 1, starting with the full text: library("tidyverse") library("tidytext") renert <- readRDS("renert_full.rds") I want to study the frequencies of words, so I will use a function from the tidytext package called unnest_tokens(), which breaks the text down into tokens.
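The unnest_tokens() step mentioned above can be sketched like this. The two-line text is a made-up stand-in for the full fable loaded from renert_full.rds; the column names `word` and `text` follow tidytext conventions.

```r
library(tidyverse)
library(tidytext)

# A tiny stand-in for the full Renert text
renert <- tibble(
  text = c("Den Renert ass e Fuuss.",
           "De Wollef ass net frou.")
)

# unnest_tokens(output, input) splits each row of `text` into
# one lowercase word per row, stripping punctuation
renert_tokenized <- renert %>%
  unnest_tokens(word, text)
```

The resulting one-word-per-row data frame is the "tidy text" format that the frequency analysis in part 3 builds on.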