
Thursday, April 30, 2020

How to resolve the error "No viable overloaded ="

Environment
  • Xcode 11.4.1
  • Swift 5
  • iOS 13.4.1
How to deal with the error "No viable overloaded =" that appears when using RealmSwift.


Reinstall Realm with a pinned version
Open the Podfile and rewrite the RealmSwift line as follows:
pod 'RealmSwift', '~> 3.18.0'
Run the following in Terminal:
pod repo update
pod install
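
After reinstalling, the project should build again. As a quick sanity check, a minimal RealmSwift model and write such as the following should compile and run (this is only a sketch; the Dog class, its properties, and addSampleDog() are placeholder examples, not part of the original project):

import RealmSwift

// Placeholder model class used only to confirm that RealmSwift builds
class Dog: Object {
    @objc dynamic var name = ""
    @objc dynamic var age = 0
}

// Write one object to the default Realm
func addSampleDog() {
    let realm = try! Realm()
    let dog = Dog()
    dog.name = "Pochi"
    dog.age = 3
    try! realm.write {
        realm.add(dog)
    }
}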

Creating an ML model file with Xcode's Create ML

This time I want to implement image recognition for vegetables, so I will collect vegetable image data and create an ML model file.
Environment
  • Xcode 11.4.1
  • Swift 5
  • iOS 13.4.1

Reference sites

1. The full Open Images Dataset is too large to download in its entirety, so download only the images for the tags you need.
First, check the image tags on the Explore page.
2. Next, download the images using OIDv4_ToolKit.
The repository is here: https://github.com/EscVM/OIDv4_ToolKit
Clone the repository:
git clone https://github.com/EscVM/OIDv4_ToolKit.git
Set up the environment:
cd OIDv4_ToolKit
pip install -r requirements.txt
Run main.py:
python3 main.py downloader --classes Carrot --type_csv train
Specify the tag(s) to download after --classes; in the example above it is Carrot.
After --type_csv you can choose training data (train), validation data (validation), or test data (test); specify all to download everything.
The downloaded files have the following folder structure:
train
├─Bell Pepper
├─Cabbage
├─Carrot
├─Potato
└─Tomato
test
├─Bell Pepper
├─Cabbage
├─Carrot
├─Potato
└─Tomato
3. Create the ML model file.
Launch Create ML from Xcode > Open Developer Tool > Create ML.
Choose New Document, then select the Image Classifier template.
Fill in the project name and other details.
Drag and drop the train folder and the test folder containing the images onto Training Data and Testing Data, respectively.

Finally, press the Train button.
Even on my old Mac, training finished in about 15 minutes with 2,836 training images and 566 testing images.
The generated output model file can be taken out by drag and drop.
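
The exported .mlmodel file can then be added to an Xcode project and used through Vision. The sketch below assumes the Create ML project was named VegetableClassifier, so that Xcode generates a VegetableClassifier class from the model file; that name is an assumption, so use whatever class your own model generates:

import Vision

// Wrap the generated Core ML model class for use with Vision.
// "VegetableClassifier" is an assumed name; replace it with the class
// Xcode generates from your .mlmodel file.
func makeVegetableRequest(completion: @escaping (VNRequest, Error?) -> Void) -> VNCoreMLRequest? {
    guard let model = try? VNCoreMLModel(for: VegetableClassifier().model) else { return nil }
    return VNCoreMLRequest(model: model, completionHandler: completion)
}

In the camera sample in the next section, replacing MobileNetV2() with this generated class makes the app classify the vegetables trained above instead of the generic MobileNetV2 classes.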


Real-time image recognition from the camera feed

Environment
  • Xcode 11.4.1
  • Swift 5
  • iOS 13.4.1

Reference sites

SceneDelegate.swift
import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    var window: UIWindow?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        // Use this method to optionally configure and attach the UIWindow `window` to the provided UIWindowScene `scene`.
        // If using a storyboard, the `window` property will automatically be initialized and attached to the scene.
        // This delegate does not imply the connecting scene or session are new (see `application:configurationForConnectingSceneSession` instead).
        //guard let _ = (scene as? UIWindowScene) else { return }
        
        guard let windowScene = (scene as? UIWindowScene) else { return }
        let window = UIWindow(windowScene: windowScene)
        self.window = window
        window.rootViewController = ViewController(nibName: nil, bundle: nil)
        window.makeKeyAndVisible()
    }

    func sceneDidDisconnect(_ scene: UIScene) {
        // Called as the scene is being released by the system.
        // This occurs shortly after the scene enters the background, or when its session is discarded.
        // Release any resources associated with this scene that can be re-created the next time the scene connects.
        // The scene may re-connect later, as its session was not necessarily discarded (see `application:didDiscardSceneSessions` instead).
    }

    func sceneDidBecomeActive(_ scene: UIScene) {
        // Called when the scene has moved from an inactive state to an active state.
        // Use this method to restart any tasks that were paused (or not yet started) when the scene was inactive.
    }

    func sceneWillResignActive(_ scene: UIScene) {
        // Called when the scene will move from an active state to an inactive state.
        // This may occur due to temporary interruptions (ex. an incoming phone call).
    }

    func sceneWillEnterForeground(_ scene: UIScene) {
        // Called as the scene transitions from the background to the foreground.
        // Use this method to undo the changes made on entering the background.
    }

    func sceneDidEnterBackground(_ scene: UIScene) {
        // Called as the scene transitions from the foreground to the background.
        // Use this method to save data, release shared resources, and store enough scene-specific state information
        // to restore the scene back to its current state.
    }


}
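With this change the window and its root view controller are created entirely in code, so the initial screen no longer comes from Main.storyboard. Depending on how the project was generated, the storyboard reference in Info.plist's scene configuration may also need to be removed (not covered here).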
ViewController.swift
 
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(30)
        return label
    }()
    
    override func viewDidLoad() {
        // call the parent function
        super.viewDidLoad()
        
        // establish the capture session and add the label
        setupCaptureSession()
        view.addSubview(label)
        setupLabel()
    }
    
    override func didReceiveMemoryWarning() {
        // call the parent function
        super.didReceiveMemoryWarning()
        
        // Dispose of any resources that can be recreated.
    }
    
    func setupCaptureSession() {
        // create a new capture session
        let captureSession = AVCaptureSession()
        
        // find the available cameras
        let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices
        
        do {
            // select a camera
            if let captureDevice = availableDevices.first {
                captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
            }
        } catch {
            // print an error if the camera is not available
            print(error.localizedDescription)
        }
        
        // setup the video output to the screen and add output to our capture session
        let captureOutput = AVCaptureVideoDataOutput()
        captureSession.addOutput(captureOutput)
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = view.frame
        view.layer.addSublayer(previewLayer)
        
        // buffer the video and start the capture session
        captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        captureSession.startRunning()
    }
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // load the MobileNetV2 Core ML model and wrap it for Vision
        guard let model = try? VNCoreMLModel(for: MobileNetV2().model) else { return }
        // run an inference with CoreML
        let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in
            // grab the inference results
            guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }
            
            // grab the highest confidence result
            guard let observation = results.first else { return }
            
            // create the label text components
            let predclass = "\(observation.identifier)"
            let predconfidence = String(format: "%.2f%%", observation.confidence * 100)
            // set the label text
            DispatchQueue.main.async(execute: {
                self.label.text = "\(predclass) \(predconfidence)"
            })
        }
        
        // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
        // Applications generating frames, compressing or decompressing video, or using Core Image
        // can all make use of Core Video pixel buffers
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        
        // execute the request
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
    
    func setupLabel() {
        // constrain the label in the center
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        // constrain the label to 50 points from the bottom
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
    }
}
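Note that the app accesses the camera, so Info.plist must contain an NSCameraUsageDescription entry; without it the system terminates the app as soon as it requests camera access, and the capture session never starts.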