## All-in-one feedback

###### Python

Below are a couple of sample projects from ProjectEuler.net

###### Web Scraping

Below are a few web scrapers I built with Python and Selenium

###### Websites

Most of these projects were made with React.js, but I also have experience with plain HTML, CSS, and JavaScript

###### Android

Built with React Native (Facebook's cross-platform framework) and JavaScript.

__ProjectEuler.net__

## Problem 1: Multiples of 3 or 5

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. Answer: 233168

```
mult = []
for i in range(1000):
    if i % 3 == 0:
        mult.append(i)
    elif i % 5 == 0:
        mult.append(i)
sum_mult = sum(mult)
print(sum_mult)  # 233168
```
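The same computation can be collapsed into a single pass with a generator expression — an equivalent rewrite for comparison, not the original solution:

```
# Sum every number below 1000 that is divisible by 3 or 5, without
# building an intermediate list
total = sum(i for i in range(1000) if i % 3 == 0 or i % 5 == 0)
print(total)  # 233168
```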

## Problem 2: Even Fibonacci numbers

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. Answer: 4613732

```
prev_num = 1
n = 1
fib_even_nums = []
while n <= 4000000:  # terms must not exceed four million
    if n % 2 == 0:
        fib_even_nums.append(n)
    next_num = n + prev_num
    prev_num = n
    n = next_num
sum_fib = sum(fib_even_nums)
print(sum_fib)  # 4613732
```
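The same logic can be packaged as a reusable function, keeping a running total instead of a list — an alternative sketch, not the original code:

```
def even_fib_sum(limit):
    """Sum the even Fibonacci terms that do not exceed limit."""
    total, a, b = 0, 1, 2
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b  # advance to the next Fibonacci term
    return total

print(even_fib_sum(4_000_000))  # 4613732
```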

## Problem 3: Largest prime factor

The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143 ? Answer: 6857

```
# Divide out each prime factor by trial division; whatever remains
# at the end is the largest prime factor
n = 600851475143
factor = 2
while factor * factor <= n:
    if n % factor == 0:
        n //= factor
    else:
        factor += 1
print(n)  # 6857
```

__Web Scraping__

## Selenium Web Scraper

I used this Python web scraper to pull all 365 days' worth of content for the Daily Harvard app.

```
from selenium import webdriver
import json

path = "C:\\Users\\spencer.craigie\\OneDrive\\Documents\\Coding Projects\\Python\\Web Scraping\\chromedriver.exe"
driver = webdriver.Chrome(path)

new_website = "https://dailyharvard.wordpress.com/2021/06/03/december-31/"
d = 365
day = {}
while d != 0:
    driver.get(new_website)
    title = driver.find_element_by_class_name("entry-title")
    text = driver.find_element_by_class_name("entry-content")
    elems = driver.find_elements_by_css_selector(".nav-previous [href]")
    links = [elem.get_attribute('href') for elem in elems]
    day[d] = []
    day[d].append({
        "title": title.text,
        "text": text.text,
        "link": links
    })
    # save progress after every page in case the run is interrupted
    with open('content.json', 'w') as cont:
        json.dump(day, cont, indent=2)
    new_website = links[0]  # follow the "previous post" link
    print(new_website)
    d -= 1
```
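Each entry in the resulting content.json looks roughly like this — field values here are illustrative, not actual scraped content:

```
{
  "365": [
    {
      "title": "December 31",
      "text": "…page body…",
      "link": ["https://dailyharvard.wordpress.com/…/december-30/"]
    }
  ]
}
```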

__Other__

## changeFile.py

Each day of the Harvard app lives on a separate page, which makes it difficult to apply the same change to every day at once. I wrote this program to rewrite all of the content in one pass.

```
for i in range(28):
    number = i + 1
    with open(f'02-{number}', 'r') as original:
        data = original.read()
    # wrap each day's content in a React Native screen component
    # (literal braces in the f-strings are escaped as {{ and }})
    header = ("import React from 'react';\n"
              "import {StyleSheet, Text, View, SafeAreaView, Button, ScrollView} from 'react-native';\n"
              f"function Feb{number}({{ navigation }}) {{\nreturn (\n<ScrollView>\n")
    footer = f"\n</ScrollView>\n);\n}}\nexport default Feb{number};"
    with open(f'02-{number}', 'w') as modified:
        modified.write(header + data + footer)
```