teaching a transformer to understand how far apart (common) cities are.
citybert

  1. Generates a dataset of cities (US only for now) and their pair-wise geodesic distances.
  2. Uses that dataset to fine-tune a neural net to understand that cities closer to one another are more similar.
  3. Distances become labels through the formula `1 - distance / MAX_DISTANCE`, where `MAX_DISTANCE = 20_037.5` km is half of the Earth's circumference.
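The label formula above can be sketched as follows (the function name is illustrative, not taken from the repo's code):

```python
MAX_DISTANCE = 20_037.5  # km, half of the Earth's circumference

def distance_to_label(distance_km: float) -> float:
    """Map a pair-wise geodesic distance to a similarity label in [0, 1].

    Identical locations map to 1.0; antipodal points map to 0.0.
    """
    return 1.0 - distance_km / MAX_DISTANCE
```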

There are other factors, such as political borders, that can make cities that are "close together" on the globe "far apart" in reality. Factors like these are not considered in this model; it considers only geography.

However, for use-cases that involve different measures of distance (perhaps just time-zones, or something that reflects the realities of travel), the general principles demonstrated here should still apply: pick a metric, generate data, train.
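That recipe can be sketched with a pluggable metric; the helper, the city tuples, and the toy time-zone metric below are all hypothetical, not from this repo:

```python
from itertools import combinations

def make_pairs(cities, metric, max_value):
    """Yield (name_a, name_b, label) for every unordered pair of cities,
    where label = 1 - metric(a, b) / max_value."""
    for a, b in combinations(cities, 2):
        yield a[0], b[0], 1.0 - metric(a, b) / max_value

# Toy metric: absolute UTC-offset difference in hours, scaled by 24.
cities = [("NYC", -5), ("London", 0), ("Tokyo", 9)]
tz_diff = lambda a, b: abs(a[1] - b[1])
pairs = list(make_pairs(cities, tz_diff, 24.0))
```

Any metric works as long as `max_value` bounds it, so the labels stay in `[0, 1]`.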

Particularly useful additions to the dataset would be:

  • airports: they (more or less) have unique codes, and this semantic understanding would be helpful for search engines.
  • aliases for cities: the dataset used for city data (lat/lon) contains a pretty exhaustive list of aliases for the cities. It would be good to generate examples of these with a distance of 0 and train the model on that knowledge.
  • time-zones: encode the difference in hours (relative to the worst possible case) as labels associated with the time-zone formatted strings.
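The alias idea could be sketched like this (the alias mapping shown is a made-up example, not the repo's dataset):

```python
from itertools import combinations

def alias_pairs(aliases_by_city):
    """Emit (alias_a, alias_b, 1.0) for every pair of aliases of the
    same city — a distance of 0 maps to the maximum label of 1.0."""
    for aliases in aliases_by_city.values():
        for a, b in combinations(aliases, 2):
            yield a, b, 1.0

# Hypothetical alias lists; the real dataset's aliases would go here.
example = {"New York": ["New York", "NYC", "New York City"]}
pairs = list(alias_pairs(example))
```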

notes

  • see Makefile for instructions.
  • Generating the data took about 13 minutes (for 3269 US cities) on 8-cores (Intel 9700K), yielding 2,720,278 records (combinations of cities).
  • Training on an Nvidia 3090 FE takes about an hour per epoch with an 80/20 train/test split. The batch size is 16, so there were 136,014 steps per epoch.
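The steps-per-epoch figure follows from the record count; as a quick sanity check (not repo code):

```python
import math

records = 2_720_278              # city-pair records generated
train = int(records * 0.8)       # 80% of records used for training
steps = math.ceil(train / 16)    # batch size 16, last partial batch counts
print(steps)  # 136014
```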
  • **TODO**: Need to add training/validation examples that involve city names in the context of sentences. It is unclear how the model performs on sentences, as it was trained only on word-pairs.