Saturday, June 8, 2019

How to use SwiftUI previews with an existing project and storyboard

Here is example code for a new Swift file added to an existing project. It loads Main.storyboard and previews the MainViewController instance with the new iOS 13 SwiftUI framework under Xcode 11.

MainViewPreviews.swift    Select all
// MainViewPreviews.swift
// FishTagCreator
//
// Created by javacom on 8 Jun 2019.
//

import SwiftUI

#if DEBUG
struct MainViewControllerPreviews : PreviewProvider, UIViewControllerRepresentable {

    // MARK: PreviewProvider
    static var previews: some View {
        MainViewControllerPreviews()
    }

    // MARK: UIViewControllerRepresentable
    typealias UIViewControllerType = MainViewController

    func makeUIViewController(context: Context) -> MainViewController {
        let mainStoryboard: UIStoryboard = UIStoryboard(name: "Main", bundle: nil)
        let mainViewController: MainViewController = mainStoryboard.instantiateViewController(withIdentifier: "MainController") as! MainViewController
        return mainViewController
    }

    func updateUIViewController(_ uiViewController: MainViewController, context: Context) {
    }
}
#endif


It is important to assign a Storyboard ID, say "MainController", to your MainViewController in the Identity inspector of Main.storyboard.


Saturday, July 7, 2018

How to make a bootable macOS installer on USB in macOS Mojave

Creating a bootable macOS installer on a USB drive is useful for repairing the filesystem when another Mac cannot boot.

Step 1: Prepare a flash drive of at least 12GB, formatted as Mac OS Extended (Journaled) with GUID Partition Map as the scheme.
By default, the USB flash drive is named Untitled.
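
If you prefer Terminal over Disk Utility, a command like the one below can do the formatting. This is a minimal sketch: the disk identifier (disk2 here) is an assumption, so check the output of diskutil list first and substitute your USB drive's identifier.

shellscript    Select all
# list disks and find the USB drive's identifier (assumed to be disk2 below)
diskutil list
# erase the drive as Mac OS Extended (Journaled) with a GUID Partition Map, named Untitled
sudo diskutil eraseDisk JHFS+ Untitled GPT /dev/disk2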

Step 2: Go to the Mac App Store and check your past purchase history for any Sierra Developer Beta, then download it. For unknown reasons, previous versions of macOS cannot be downloaded from the Mac App Store under Mojave.
There is one more rule: "A Mac can boot NO version OLDER than the version it shipped with", so choose the newer version.


Install_macOS_Sierra_Developer_Beta.rar (4.40GB)
https://mega.nz/#!yk4lSSRQ!WoOSpLf5BSlRR4if3RrbHVHQptG0Tfmw0Bnx4BCrHlA
Install_macOS_High_Sierra.rar (4.86GB)
https://mega.nz/#!WopVXYqQ!LlfKompmLDag20CE6UrsYQmL6e9mKoEgW08bLAvcnbs
Install_macOS_Mojave_Beta.rar (5.22GB)
https://mega.nz/#!qtwxkS7T!7_lG6VhwQLL1Zyc_-s_T5jjVu06vnnvHJTsSTa7fNiI

Step 3: Open Terminal and run the command for your installer:
shellscript    Select all
# for Sierra Developer beta the command is
sudo /Applications/Install\ macOS\ Sierra\ Developer\ Beta.app/Contents/Resources/createinstallmedia --volume /Volumes/Untitled --applicationpath /Applications/Install\ macOS\ Sierra\ Developer\ Beta.app

# for High Sierra the command is
sudo /Applications/Install\ macOS\ High\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/Untitled --applicationpath /Applications/Install\ macOS\ High\ Sierra.app

# for Mojave beta the command is
sudo /Applications/Install\ macOS\ Mojave\ Beta.app/Contents/Resources/createinstallmedia --volume /Volumes/Untitled --nointeraction --downloadassets


Step 4: Plug the bootable macOS installer USB into a Mac, hold the Option key while booting, and use Terminal to repair the disk or filesystem. The reason to use High Sierra or above is that it can mount the new Apple File System (APFS).
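
For example, once booted from the installer USB you can open Terminal from the Utilities menu and run something like the sketch below. The disk identifier and volume name are assumptions; adjust them to match the output of diskutil list.

shellscript    Select all
# list volumes and identifiers
diskutil list
# run repair on a volume by identifier (disk1s1 is an assumption)
diskutil repairVolume /dev/disk1s1
# or run repair on a named volume
diskutil repairVolume /Volumes/Macintosh\ HD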

Friday, June 29, 2018

How to install CocoaPods for macOS 10.14 beta


shell script    Select all
# update gem
sudo gem update --system
# Operation not permitted error, yes do it twice
sudo gem update --system

# install cocoapods
sudo gem install -n /usr/local/bin cocoapods

# install dependencies for project
cd ~/MyProject
pod install

# if re-clone CocoaPods repo spec
cd ~/.cocoapods/repos/
rm -fr master/
git clone --depth 1 https://github.com/CocoaPods/Specs.git master
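
A quick check (a minimal sketch) that the install worked and the local spec repo is current:

shell script    Select all
# verify the CocoaPods binary and version
pod --version
# refresh the local spec repo if pod install cannot resolve dependencies
pod repo update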


Sunday, June 17, 2018

How to install turicreate on macOS 10.14 beta

Install turicreate on macOS 10.14 beta 1
shell script    Select all
# upgrade pip
# curl https://bootstrap.pypa.io/get-pip.py | sudo python
curl https://bootstrap.pypa.io/get-pip.py | python

# install packages
sudo pip install requests==2.18.4 turicreate==5.0b1
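
A quick sanity check that the package imports under the system Python (a minimal sketch; it assumes turicreate exposes __version__, which most releases do):

shell script    Select all
# confirm turicreate imports and report its version
python -c "import turicreate as tc; print(tc.__version__)"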


(1) Test turicreate example - Image Classifier
shell script    Select all
mkdir -p $HOME/MLClassifier
cd $HOME/MLClassifier

# download dataset and cleanup (quote the URL so the shell does not treat & as a background operator)
curl -L -o dataset.zip "https://drive.google.com/uc?id=1ZLigrn7YcETalcj2qK6UqXceDdOV3244&export=download"
unzip dataset.zip
rm -fr __MACOSX; rm dataset/.DS_Store dataset/*/.DS_Store

# create python script
cat > classifier.py << 'EOF'
import turicreate as turi

# load images from dataset folder
url = "dataset/"
data = turi.image_analysis.load_images(url)

# define image categories
data["foodType"] = data["path"].apply(lambda path: "Rice" if "rice" in path else "Soup")

# create sframe
data.save("rice_or_soup.sframe")

# preview dataset
data.explore()

# load sframe
dataBuffer = turi.SFrame("rice_or_soup.sframe")

# create training data using 90% of dataset
trainingBuffers, testingBuffers = dataBuffer.random_split(0.9)

# create model
model = turi.image_classifier.create(trainingBuffers, target="foodType", model="squeezenet_v1.1", max_iterations=100)
# Alternate model use ResNet-50
# model = turi.image_classifier.create(trainingBuffers, target="foodType", model="resnet-50")

# evaluate model
evaluations = model.evaluate(testingBuffers)
print evaluations["accuracy"]

# save model
model.save("rice_or_soup.model")
model.export_coreml("RiceSoupClassifier.mlmodel")
EOF

# run script
python classifier.py


(2) Test turicreate example - Logistic Regression
shell script    Select all
mkdir -p $HOME/LGClassifier
cd $HOME/LGClassifier

# create python script
cat > classifier.py << 'EOF'
import turicreate as turi

data = turi.SFrame('http://static.turi.com/datasets/regression/yelp-data.csv')
data['is_good'] = data['stars'] >= 3

# create sframe
data.save("yelp.sframe")

# preview dataset
#data.show()

# load sframe
dataBuffer = turi.SFrame("yelp.sframe")

# create training data using 80% of dataset
train_data, test_data = dataBuffer.random_split(0.8)

# create model
model = turi.logistic_classifier.create(train_data, target='is_good', features = ['user_avg_stars', 'business_avg_stars', 'user_review_count', 'business_review_count', 'city', 'categories_dict'], max_iterations=200)
print model

# save predictions
predictions = model.classify(test_data)
print predictions

# evaluate model
evaluations = model.evaluate(test_data)
print "Accuracy : %s" % evaluations["accuracy"]
print "Confusion Matrix : \n%s" % evaluations["confusion_matrix"]
EOF

# run script
python classifier.py


(3) Some data manipulation tips when preparing training data
shell script    Select all
# remove quotes and thousands separators from numbers in a csv file (typically produced by "save as CSV" in Excel)
# for example, "222,267.87","455,365.44",... convert to 222267.87,455365.44,...
# In shell script
cat exceldata.csv | perl -p -e 's/,(?=[\d,.]*\d")//g and s/"(\d[\d,.]*)"/\1/g' > dataset.csv

# use map, lambda and zip functions when convert and compute numeric data from 2 data columns
# In python script
import math
data['rate'] = map(lambda (x,y): 0 if x is None or y is None else (0 if math.isnan(x) or math.isnan(y) or math.isinf(y) or x==0 else (999999 if math.isinf(x) or y==0 else 999999 if x/y > 999999 else x/y)), zip(data['OS'], data['Total Amount']))

# replace training data when values are inf(infinity) or nan(Not A Number) in 'amount' column
# In python script
import math
train_data['amount'] = train_data['amount'].apply(lambda x: 0 if math.isnan(x) else x)
train_data['amount'] = train_data['amount'].apply(lambda x: 999 if math.isinf(x) else x)
# or use nested if else
# In python script
import math
train_data['amount'] = train_data['amount'].apply(lambda x: 0 if math.isnan(x) else (999 if math.isinf(x) else x))
print train_data['amount'].summary()

# remove rows in training data with inf(infinity) or nan(Not A Number) values in 'amount' column
# In python script
import math
train_data = train_data[train_data['amount'].apply(lambda x: 0 if math.isinf(x) or math.isnan(x) else 1)]

# SFrame methods, but beware that some of the methods are not working
# https://apple.github.io/turicreate/docs/api/generated/turicreate.SFrame.html
# Other SFrame data manipulation examples
# https://github.com/apple/turicreate/blob/master/userguide/sframe/data-manipulation.md


(4) Some data examination tips
shell script    Select all
# summary
print train_data['amount'].summary()

# crosstab
import pandas as pd
pd.crosstab(data["Rating"], data["is_bad"], margins=True)

# custom frequency count for 'amount' column
import pandas as pd
pd.crosstab(train_data['amount'].apply(lambda x: " 0-10" if x <=10 else ("10-20" if x <=20 else ("20-30" if x <=30 else ("30-40" if x <=40 else ("40-50" if x <=50 else ">50"))))), "Count")


Saturday, June 9, 2018

Playground examples for Xcode 10 Beta 1

Playground Support for iOS12 and Swift 4.2
iOS.playground    Select all
import UIKit
import PlaygroundSupport

//: **Markup**
//: ### Define UIView
class MyView : UIView {
    @objc public func changeTitle(_ sender: UIButton!) {
        sender.setTitle("Welcome to WWDC2018", for: [])
    }
}

let myView = MyView(frame: CGRect(x:0, y:0, width:500, height:500))

var button = UIButton(type: .system)
button.frame = CGRect(x:100, y:100, width:300, height:200)
button.setTitle("Hi Press me!", for: [])
button.tintColor = .blue
button.setTitleColor(.orange, for: [])
button.addTarget(myView, action: #selector(MyView.changeTitle(_:)), for: .touchUpInside)

myView.addSubview(button)

PlaygroundPage.current.liveView = myView




Playground Icon Drawings in iOS and Swift 4.2
iOS.playground    Select all
import UIKit

//: Define IconView
class IconView: UIView {
    override func draw(_ rect: CGRect) {
        drawRawBackgroundWithBaseColor(strokeColor: UIColor.orange, backgroundRectangle: self.bounds)
        let textAttributes: [NSAttributedString.Key : Any] = [
            NSAttributedString.Key.foregroundColor: UIColor.red,
            NSAttributedString.Key.font: UIFont.systemFont(ofSize: 32.0)]
        let FString: String = "Hello World"
        let distanceX: CGFloat = -12.0
        let distanceY: CGFloat = 0.0
        let centerX = self.bounds.midX
        let centerY = self.bounds.midY
        FString.draw(at: CGPoint(x: centerX + distanceX, y: centerY + distanceY), withAttributes: textAttributes)
    }
}

func drawRawBackgroundWithBaseColor(strokeColor: UIColor, backgroundRectangle: CGRect) {
    let lineWidth = backgroundRectangle.width/36.0
    let cornerRadius = backgroundRectangle.width/16.0
    let tileRectangle = backgroundRectangle.insetBy(dx: lineWidth/2.0, dy: lineWidth/2.0)

    // Stroke Drawing
    let strokePath = UIBezierPath(roundedRect: tileRectangle, cornerRadius: cornerRadius)
    strokeColor.setStroke()
    strokePath.lineWidth = lineWidth
    strokePath.stroke()

    // Draw an ellipse
    let ovalPath = UIBezierPath(ovalIn: backgroundRectangle.insetBy(dx: lineWidth*1.5, dy: lineWidth*1.5))
    UIColor.blue.setStroke()
    ovalPath.lineWidth = lineWidth
    ovalPath.stroke()

    let context: CGContext = UIGraphicsGetCurrentContext()!
    context.setFillColor(UIColor.green.cgColor)
    context.addRect(CGRect(x: 100.0, y: 100.0, width: 60.0, height: 60.0))
    context.fillPath()
}

//: Instantiate the UIView
let rect = CGRect(x: 0.0, y: 0.0, width: 420.0, height: 320.0)
let icon = IconView(frame: rect)
icon.backgroundColor = UIColor.clear




CreateML for macOS 10.14 Beta 1 (requires macOS 10.14 Mojave)
macOS.playground    Select all
import Cocoa
import CreateML

//: Specify Data
/* Input as CSV mycsv.csv:
beds,baths,squareFt,price
2,2,2000,400000
4,3,2500,500000
3,2,1800,450000
3,2,1500,300000

let houseData = try MLDataTable(contentsOf: URL(fileURLWithPath: "mycsv.csv"))
*/

//: Input as dictionary
let mydata : [String: MLDataValueConvertible] = [
    "beds": [2,4,3,3],
    "baths": [2,3,2,2],
    "squareFt": [2000,2500,1800,1500],
    "price": [400000,500000,450000,300000]
]
let houseData = try MLDataTable(dictionary: mydata)
let (trainingData, testData) = houseData.randomSplit(by: 0.8, seed: 0)

//: Create Model
let pricer = try MLRegressor(trainingData: houseData, targetColumn: "price")

//: Evaluate Model
let evaluation = pricer.evaluation(on: testData)
print(pricer)

//: Save Model
try pricer.write(to: URL(fileURLWithPath: "HousePricer.mlmodel"))




Tuesday, July 4, 2017

How to fetch WWDC 2017/2018/2019 Video Subtitle to SRT format

Create and run this script wwdc_fetch_srt.sh to fetch the WWDC2019 subtitles.
Reference : https://github.com/wsvn53/wwdc2016-subtitles

wwdc_fetch_srt.sh    Select all
#!/bin/sh
# @Author: Ethan
# @Date: 2016-06-22 14:10:53
# @Last Modified by: javacom
# @Last Modified time: 2019-06-06

WWDC_YEAR=2019; # change to 2017/2018 and also works for WWDC2017 or WWDC2018
WWDC_SESSION_PREFIX=https://developer.apple.com/videos/play/wwdc$WWDC_YEAR;
WWDC_LOCAL_DIR=$(basename $WWDC_SESSION_PREFIX);

detect_video_m3u8 () {
    local session_url=$WWDC_SESSION_PREFIX/$SESSION_ID/;
    local session_html=$(curl -s $session_url);
    local video_url=$(echo "$session_html" | grep .m3u8 | grep $SESSION_ID | head -n1 | sed "s#.*\"\(https://.*m3u8\)\".*#\1#");
    echo "$session_html" | grep .mp4 | grep $SESSION_ID | sed "s#.*\"\(https://.*mp4\).*\".*#\1#" | while read mp4_url; do
        local mp4_filename=$(basename $mp4_url | cut -d. -f1);
        local srt_filename=$mp4_filename.srt;
        echo "> Subtitle local: $WWDC_LOCAL_DIR/$srt_filename" >&2;
        > $WWDC_LOCAL_DIR/$srt_filename;
    done
    echo "$video_url";
    echo "> Video: $video_url" >&2;
}

detect_subtitle_m3u8 () {
    local video_url=$1;
    local subtitle_uri=$(curl -s $video_url | grep "LANGUAGE=\"eng\"" | sed "s#.*URI=\"\(.*\)\"#\1#");
    local subtitle_url=$subtitle_uri;
    [[ "$subtitle_uri" != http* ]] && {
        subtitle_url=$(dirname $video_url)/$subtitle_uri;
    }
    echo "$subtitle_url";
    echo "> Subtitle: $subtitle_url" >&2;
}

download_subtitle_contents () {
    local subtitle_url=$1;
    echo "> Downloading... "
    local subtitle_base_url=$(dirname $subtitle_url);
    curl -s $subtitle_url | grep "webvtt" | while read webvtt; do
        local subtitle_webvtt=$subtitle_base_url/$webvtt;
        #echo "- get $subtitle_webvtt";
        local subtitle_content=$(curl -s $subtitle_webvtt);
        ls $WWDC_LOCAL_DIR/"$SESSION_ID"_* | while read srt_file; do
            echo "$subtitle_content" >> $srt_file;
        done
    done
}

main () {
    [ ! -d $WWDC_LOCAL_DIR ] && { mkdir $WWDC_LOCAL_DIR; }
    curl -s $WWDC_SESSION_PREFIX | grep /videos/play/wwdc$WWDC_YEAR | sed "s#.*/videos/play/wwdc$WWDC_YEAR/\([0-9]\{3\}\).*#\1#" | sort | uniq | while read SESSION_ID; do
        #echo "SESSION_ID is" $SESSION_ID
        local video_url=$(detect_video_m3u8 $SESSION_ID);
        local subtitle_url=$(detect_subtitle_m3u8 $video_url);
        download_subtitle_contents $subtitle_url;
    done
}

main;
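
To run it (a minimal sketch; the subtitles are collected under ./wwdc2019/, the directory derived from WWDC_SESSION_PREFIX in the script):

shellscript.sh    Select all
# make the script executable and run it
chmod +x wwdc_fetch_srt.sh
./wwdc_fetch_srt.sh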




Run this shell script to format the downloaded files as SRT subtitles.

shellscript.sh    Select all
WWDC_YEAR=2019; # change to 2017/2018 and also works for WWDC2017 or WWDC2018
cd wwdc$WWDC_YEAR
mkdir -p sd
mkdir -p hd
for i in ???_sd_*.srt; do
    sed -e '/WEBVTT/d;/X-TIMESTAMP/d;' $i | awk '/^[0-9]{2}:[0-9]{2}:/ {seen[$0]++; skipduplicated=0} {if (seen[$0]>1) skipduplicated=1; if (!skipduplicated) print $0}' | awk -v RS="" '{gsub("\n", "-Z"); print}' | awk '$0 !~/^WEB/ {print $0}' | uniq | awk '{printf "\n%s-Z%s", NR,$0 }' | awk -v ORS="\n\n" '{gsub("-Z", "\n"); print}' | sed -e 's/.A:middle$//g;s/&gt;/>/g;s/&lt;/</g;1,2d;' > sd/$i;
done
for i in ???_hd_*.srt; do
    sed -e '/WEBVTT/d;/X-TIMESTAMP/d;' $i | awk '/^[0-9]{2}:[0-9]{2}:/ {seen[$0]++; skipduplicated=0} {if (seen[$0]>1) skipduplicated=1; if (!skipduplicated) print $0}' | awk -v RS="" '{gsub("\n", "-Z"); print}' | awk '$0 !~/^WEB/ {print $0}' | uniq | awk '{printf "\n%s-Z%s", NR,$0 }' | awk -v ORS="\n\n" '{gsub("-Z", "\n"); print}' | sed -e 's/.A:middle$//g;s/&gt;/>/g;s/&lt;/</g;1,2d;' > hd/$i;
done
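
A quick spot check of the result (a minimal sketch; the session number 101 is an assumption, use any session you downloaded):

shellscript.sh    Select all
# inspect the first few cues of one converted SD subtitle
head -n 12 sd/101_sd_*.srt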




Run this script wwdc_fetch_mp4.sh to download all mp4 (HD and SD) videos

wwdc_fetch_mp4.sh    Select all
#!/bin/sh
# @Last Modified by: javacom
# @Last Modified time: 2019-06-06

WWDC_YEAR=2019; # change to 2017/2018 and also works for WWDC2017 or WWDC2018
WWDC_SESSION_PREFIX=https://developer.apple.com/videos/play/wwdc$WWDC_YEAR;
WWDC_LOCAL_DIR=$(basename $WWDC_SESSION_PREFIX);

download_mp4_video () {
    local session_url=$WWDC_SESSION_PREFIX/$SESSION_ID/;
    local session_html=$(curl -s $session_url);
    local video_url=$(echo "$session_html" | grep .m3u8 | grep $SESSION_ID | head -n1 | sed "s#.*\"\(https://.*m3u8\)\".*#\1#");
    echo "$session_html" | grep .mp4 | grep $SESSION_ID | sed "s#.*\"\(https://.*mp4\).*\".*#\1#" | while read mp4_url; do
        local mp4_filename=$(basename $mp4_url);
        if [ -e $WWDC_LOCAL_DIR/$mp4_filename ]
        then
            echo "> MP4 already existed : $WWDC_LOCAL_DIR/$mp4_filename" >&2;
            echo "> To resume broken download use curl -C - --connect-timeout 1200 -o $WWDC_LOCAL_DIR/$mp4_filename $mp4_url" >&2;
            echo " " >&2;
        else
            echo "> MP4 Downloading... : $mp4_url" >&2;
            curl --connect-timeout 120 -o $WWDC_LOCAL_DIR/$mp4_filename $mp4_url
        fi
    done
}

main () {
    [ ! -d $WWDC_LOCAL_DIR ] && { mkdir $WWDC_LOCAL_DIR; }
    curl -s $WWDC_SESSION_PREFIX | grep /videos/play/wwdc$WWDC_YEAR | sed "s#.*/videos/play/wwdc$WWDC_YEAR/\([0-9]\{3\}\).*#\1#" | sort | uniq | while read SESSION_ID; do
        download_mp4_video $SESSION_ID;
    done
}

main;


One-liner version wwdc2019_fetch_mp4.sh to download all mp4 videos

wwdc2019_fetch_mp4.sh    Select all
# one liner for hd videos download
# change to 2017/2018 and also works for WWDC2017 or WWDC2018
WWDCYEAR="wwdc2019"; for i in `curl -s https://developer.apple.com/videos/$WWDCYEAR/ | grep -o '<a href="/videos/play/'"$WWDCYEAR"'/[0-9]*' | cut -d '"' -f2 | sort | uniq`; do video_url=$(curl -s https://developer.apple.com${i} | grep -o 'http.*_hd_.*.mp4'); if [ ! -z "$video_url" ]; then mp4_filename=$(basename $video_url); if [ -e $mp4_filename ]; then echo "skipping $mp4_filename"; else echo "Downloading ... $mp4_filename"; curl --connect-timeout 120 -O $video_url; fi; fi; done

# one liner for sd videos download
WWDCYEAR="wwdc2019"; for i in `curl -s https://developer.apple.com/videos/$WWDCYEAR/ | grep -o '<a href="/videos/play/'"$WWDCYEAR"'/[0-9]*' | cut -d '"' -f2 | sort | uniq`; do video_url=$(curl -s https://developer.apple.com${i} | grep -o 'http.*_sd_.*.mp4'); if [ ! -z "$video_url" ]; then mp4_filename=$(basename $video_url); if [ -e $mp4_filename ]; then echo "skipping $mp4_filename"; else echo "Downloading ... $mp4_filename"; curl -O $video_url; fi; fi; done




Wednesday, June 7, 2017

How to train a dataset in Python and convert it to a Core ML model for iOS 11

Reference http://machinelearningmastery.com/machine-learning-in-python-step-by-step/

Environment : macOS 10.12.4
matplotlib==2.0.0
numpy==1.12.1
pandas==0.19.2
scikit-learn==0.18.1
scipy==0.19.0
six==1.10.0
sklearn==0.18.1
coremltools==0.3.0
protobuf==3.3.0

Upgrade pip and install the following Python packages
shellscript.sh    Select all
pip install --upgrade pip
sudo -H pip install numpy scipy matplotlib pandas sklearn coremltools protobuf



Convert to Core ML: run the following Python code, which walks through machine learning in Python step by step and finally generates iris_lr.mlmodel.
iris_learn.py    Select all
#!/usr/bin/env python
# Check the versions of libraries

# Python version
import sys
print('Python: {}'.format(sys.version))
# scipy
import scipy
print('scipy: {}'.format(scipy.__version__))
# numpy
import numpy
print('numpy: {}'.format(numpy.__version__))
# matplotlib
import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))
# pandas
import pandas
print('pandas: {}'.format(pandas.__version__))
# scikit-learn
import sklearn
print('sklearn: {}'.format(sklearn.__version__))

# Load libraries
import pandas
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Load dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)

# shape
print(dataset.shape)
# head
print(dataset.head(20))
# descriptions
print(dataset.describe())
# class distribution
print(dataset.groupby('class').size())

# box and whisker plots
dataset.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
plt.suptitle("Box and Whisker Plots for inputs")
plt.show()
# histograms
dataset.hist()
plt.suptitle('Histograms for inputs')
plt.show()
# scatter plot matrix
scatter_matrix(dataset)
plt.suptitle('Scatter Plot Matrix for inputs')
plt.show()

# Split-out validation dataset
array = dataset.values
X = array[:,0:4]
Y = array[:,4]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)

# Test options and evaluation metric
seed = 7
scoring = 'accuracy'

# Spot Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))

# evaluate each model in turn
results = []
names = []
for name, model in models:
    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()

# Make predictions on validation dataset
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
predictions = knn.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))

print("Make predictions on LogisticRegression Model")
model = LogisticRegression()
model.fit(X_train, Y_train)
predictions = model.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))

# print prediction results on test data
for i, prediction in enumerate(predictions):
    print 'Predicted: %s, Target: %s %s' % (prediction, Y_validation[i], '' if prediction==Y_validation[i] else '(WRONG!!!)')

#convert and save scikit.learn model
#support LogisticRegression of scikit.learn
print("Convert LogisticRegression Model to coreml model")
import coremltools
coreml_model = coremltools.converters.sklearn.convert(model, ["sepal-length", "sepal-width", "petal-length", "petal-width"], "class")

#set model metadata
coreml_model.author = 'Author'
coreml_model.license = 'BSD'
coreml_model.short_description = 'LogisticRegression on Iris flower data set'

#set features description manually
coreml_model.input_description['sepal-length'] = 'Sepal Length in centimetres'
coreml_model.input_description['sepal-width'] = 'Sepal Width in centimetres'
coreml_model.input_description['petal-length'] = 'Petal Length in centimetres'
coreml_model.input_description['petal-width'] = 'Petal Width in centimetres'

#set the output description
coreml_model.output_description['class'] = 'Distinguish the species'

#save the model
coreml_model.save('iris_lr.mlmodel')

from coremltools.models import MLModel
model = MLModel('iris_lr.mlmodel')
#get the spec of the model
print(model.get_spec())


Download Xcode 9 beta and the sample code from Apple

https://docs-assets.developer.apple.com/published/51ff0c1668/IntegratingaCoreMLModelintoYourApp.zip
Modify it and add the model to the Xcode project.


Try the new refactoring tool in Xcode 9. It is amazing.


Train data using a Keras neural network model
Reference : http://machinelearningmastery.com/5-step-life-cycle-neural-network-models-keras/

shellscript.sh    Select all
# download training data
curl -O http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data

# install and activate virtual environment and install necessary python packages
# use deactivate to stop the python virtual env
sudo -H pip install --upgrade virtualenv
virtualenv --system-site-packages ~/tensorflow
source ~/tensorflow/bin/activate

# macOS, CPU only non-optimised, Python 2.7:
# https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.1.0-py2-none-any.whl
# macOS, GPU enabled, Python 2.7:
# https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow_gpu-1.1.0-py2-none-any.whl

# or find optimised wheel files from the community
# https://github.com/yaroslavvb/tensorflow-community-wheels/issues
# this optimised one (SSE4.1, SSE4.2, AVX, AVX2, FMA) works for Python 2.7, macOS 10.12, TensorFlow 1.1.0 CPU
# https://github.com/fdalvi/tensorflow-builds
# this one works for GeForce GT 650M GPU and CPU (SSE4.2, AVX) with CUDA 8.0 and cuDNN v5.1
# https://github.com/bodak/tensorflow-wheels/releases/tag/v1.1.0_27
# instructions to build your own python package
# https://ctmakro.github.io/site/on_learning/tf1c.html

# suppose we install the official non-optimised wheel file as below
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.1.0-py2-none-any.whl
pip install coremltools protobuf
pip install keras==1.2.2 h5py

Convert to Core ML: run the following Python code in the virtual environment (tensorflow) to generate pima_keras.mlmodel.
keras_learn.py    Select all
#!/usr/bin/env python
from keras.models import Sequential
from keras.layers import Dense
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)

# load pima indians dataset
#dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
dataset = numpy.loadtxt("pima-indians-diabetes.data", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
#model.fit(X, Y, epochs=150, batch_size=10)
model.fit(X, Y, 10, 150) # parameters change to keras 1.2.2

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

#convert and save keras model
model.save('pima.h5')

print("Convert Model to coreml model")
import coremltools
coreml_model = coremltools.converters.keras.convert('pima.h5')

#set model metadata
coreml_model.author = 'Author'
coreml_model.license = 'BSD'
coreml_model.short_description = 'pima-indians-diabetes'

#save the model
coreml_model.save('pima_keras.mlmodel')

from coremltools.models import MLModel
mlmodel = MLModel('pima_keras.mlmodel')
#get the spec of the model
print(mlmodel.get_spec())


Note: coremltools requires Python 2.7 (not 3.x) and supports keras==1.2.2 with TensorFlow (1.0.x, 1.1.x) only. tensorflow_gpu requires Nvidia CUDA 8.0 and cuDNN v5.1 (which also requires macOS 10.11/10.12), but recent Mac models all bundle AMD GPUs. Unless you can get an old Mac Pro upgraded with an Nvidia GPU with at least 4GB of video RAM, it is better to stay with a Mac i7 CPU or get a Linux machine dedicated to data training.

Hardware reference for Linux : https://www.oreilly.com/learning/build-a-super-fast-deep-learning-machine-for-under-1000

For Windows PCs, tensorflow/tensorflow_gpu is available only for 64-bit Python 3.5, as below. Since the current coremltools Keras converters are not compatible with Python 3.5, direct conversion is not yet available on PC.
https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl
https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.1.0-cp35-cp35m-win_amd64.whl



keras-inception-test: run the following commands in the virtual environment (tensorflow) to test the Keras InceptionV3 model. This will download the trained Inception V3 weights from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/inception_v3_weights_tf_dim_ordering_tf_kernels.h5
shellscript.sh    Select all
git clone git://github.com/vml-ffleschner/coremltools-keras-inception-test
cd coremltools-keras-inception-test/

# based on the created virtualenv in ~/tensorflow as above
source ~/tensorflow/bin/activate

# additional installation of packages
pip install olefile pillow

# Add the following lines to playground.py after the line print("CoreML Converted"):
#   coreml_model.author = 'Author'
#   coreml_model.license = 'BSD'
#   coreml_model.short_description = 'Image InceptionV3 model'
#   coreml_model.save('Inceptionv3.mlmodel')
#   print("CoreML model file Created")

# note : coreml_model.predict requires macOS 10.13 High Sierra
python playground.py


Install the TensorFlow 1.1.0 library for Java as follows
shellscript.sh    Select all
curl -O https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.1.0.jar
curl -O https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-darwin-x86_64-1.1.0.tar.gz

# install the JNI library (create the target directory first)
mkdir -p ./jni
tar xzvf libtensorflow_jni-cpu-darwin-x86_64-1.1.0.tar.gz -C ./jni

# compile and run HelloTF
javac -cp libtensorflow-1.1.0.jar HelloTF.java
java -cp libtensorflow-1.1.0.jar:. -Djava.library.path=./jni HelloTF