To start using the camera in your Activity or Fragment, implement the PassioCameraViewProvider interface. The SDK uses the implementing component as the lifecycle owner of the camera (when that component calls onPause(), the camera stops) and as the source of the Context needed to start the camera. The implementing component must have a PreviewView in its view hierarchy.
Start by adding the PreviewView to your view hierarchy. Go to your layout.xml and add the following.
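A minimal sketch of that layout entry is shown below; PreviewView comes from CameraX (androidx.camera.view), and the id myPreviewView is only an example that should match the view you return from the provider:
<androidx.camera.view.PreviewView
    android:id="@+id/myPreviewView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />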
This approach is more manual than using the PassioCameraFragment (described below), but it gives you more flexibility: you implement the PassioCameraViewProvider interface yourself and supply the required LifecycleOwner and the PreviewView added in the previous step.
class MainActivity : AppCompatActivity(), PassioCameraViewProvider {
override fun requestCameraLifecycleOwner(): LifecycleOwner {
return this
}
override fun requestPreviewView(): PreviewView {
return myPreviewView
}
}
After the user has granted permission to use the camera, start the SDK camera:
override fun onStart() {
super.onStart()
if (!hasPermissions()) {
// Request the camera permission; the camera is started once the user grants it
ActivityCompat.requestPermissions(
this,
REQUIRED_PERMISSIONS,
REQUEST_CODE_PERMISSIONS
)
} else {
PassioSDK.instance.startCamera(this /*reference to the PassioCameraViewProvider*/)
}
}
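The permission helpers referenced above are app code, not part of the SDK. A minimal sketch of what they might look like (the request code value is arbitrary; requires the android.Manifest, android.content.pm.PackageManager and androidx.core.content.ContextCompat imports):
// These names match the ones used in the snippet above
companion object {
    private val REQUIRED_PERMISSIONS = arrayOf(Manifest.permission.CAMERA)
    private const val REQUEST_CODE_PERMISSIONS = 10
}

private fun hasPermissions(): Boolean =
    REQUIRED_PERMISSIONS.all { permission ->
        ContextCompat.checkSelfPermission(this, permission) ==
            PackageManager.PERMISSION_GRANTED
    }

override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == REQUEST_CODE_PERMISSIONS) {
        if (hasPermissions()) {
            PassioSDK.instance.startCamera(this)
        } else {
            // Explain why the camera is needed and ask again if appropriate
        }
    }
}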
Using the PassioCameraFragment
PassioCameraFragment is an abstract class that handles the camera permission at runtime as well as starting the SDK's camera process. To use it, extend PassioCameraFragment in your own fragment and supply the PreviewView added to the view hierarchy in the previous step.
class MyFragment : PassioCameraFragment() {
override fun getPreviewView(): PreviewView {
return myPreviewView
}
override fun onCameraReady() {
// Proceed with initializing the recognition session
}
override fun onCameraPermissionDenied() {
// Explain to the user that the camera is needed for this feature to
// work and ask for permission again
}
}
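onCameraReady is a natural place to start the recognition session described later in this guide. A minimal sketch, assuming the foodRecognitionListener and FoodDetectionConfiguration shown further below:
override fun onCameraReady() {
    // Camera preview is running; safe to start recognizing food
    val options = FoodDetectionConfiguration().apply {
        detectBarcodes = true
    }
    PassioSDK.instance.startFoodDetection(foodRecognitionListener, options)
}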
For React Native, import the SDK and the camera view component:
import {
PassioSDK,
DetectionCameraView,
} from '@passiolife/nutritionai-react-native-sdk-v2';
To show the live camera preview, add the DetectionCameraView to your view:
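A minimal sketch, assuming it is rendered inside one of your React components (the component name and style values are placeholders):
// Render the SDK's live camera preview; size it with the standard style prop
const CameraPreview = () => (
  <DetectionCameraView style={{ flex: 1, width: '100%' }} />
);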
The SDK can detect 3 different categories: VISUAL, BARCODE and PACKAGED. The VISUAL recognition is powered by Passio's neural network and is used to recognize over 4000 food classes. BARCODE, as the name suggests, can be used to scan a barcode located on a branded food. Finally, PACKAGED can detect the name of a branded food. To choose one or more types of detection, a FoodDetectionConfiguration object is defined and the corresponding fields are set. The VISUAL recognition works automatically.
The type of food detection is defined by the FoodDetectionConfiguration object. To start the Food Recognition process a FoodRecognitionListener also has to be defined. The listener serves as a callback for all the different food detection processes defined by the FoodDetectionConfiguration. When the app is done with food detection, it should clear out the listener to avoid any unwanted UI updates.
On iOS, implement the FoodRecognitionDelegate:
extension PassioQuickStartViewController: FoodRecognitionDelegate {
func recognitionResults(candidates: FoodCandidates?,
image: UIImage?) {
if let candidates = candidates?.barcodeCandidates,
let candidate = candidates.first {
print("Found barcode: \(candidate.value)")
}
if let candidates = candidates?.packagedFoodCandidates,
let candidate = candidates.first {
print("Found packaged food: \(candidate.packagedFoodCode)")
}
if let candidates = candidates?.detectedCandidates,
let candidate = candidates.first {
print("Found detected food: \(candidate.name)")
}
}
}
Add the method startFoodDetection():
func startFoodDetection() {
setupPreviewLayer()
let config = FoodDetectionConfiguration(detectVisual: true,
volumeDetectionMode: .none,
detectBarcodes: true,
detectPackagedFood: true)
passioSDK.startFoodDetection(detectionConfig: config,
foodRecognitionDelegate: self) { ready in
if !ready {
print("SDK was not configured correctly")
}
}
}
In viewWillAppear request authorization to use the camera and start the recognition:
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
startFoodDetection()
} else {
AVCaptureDevice.requestAccess(for: .video) { (granted) in
if granted {
DispatchQueue.main.async {
self.startFoodDetection()
}
} else {
print("The user didn't grant access to use camera")
}
}
}
}
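To clear out the listener when leaving the screen (as recommended above), stop the session in viewWillDisappear. A minimal sketch, assuming the iOS SDK exposes a stopFoodDetection() counterpart to the Android call shown later:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop receiving recognition callbacks once this screen goes away
    passioSDK.stopFoodDetection()
}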
On Android, create a FoodRecognitionListener to receive the recognition results:
private val foodRecognitionListener = object : FoodRecognitionListener {
override fun onRecognitionResults(
candidates: FoodCandidates,
image: Bitmap?,
) {
// Handle result
}
}
Using the listener and the detection options, start the food detection by calling the SDK's startFoodDetection method.
override fun onStart() {
super.onStart()
val options = FoodDetectionConfiguration().apply {
detectBarcodes = true
}
PassioSDK.instance.startFoodDetection(foodRecognitionListener, options)
}
Stop the food recognition in the onStop() lifecycle callback.
override fun onStop() {
PassioSDK.instance.stopFoodDetection()
super.onStop()
}
const config: FoodDetectionConfig = {
/**
* Detect barcodes on packaged food products. Results will be returned
* as `BarcodeCandidates` in the `FoodCandidates` property of `FoodDetectionEvent`
*/
detectBarcodes: true,
/**
* Results will be returned as `packagedFoodCode` in the `FoodCandidates`
* property of `FoodDetectionEvent`
*/
detectPackagedFood: true,
};
useEffect(() => {
if (!isReady) {
return;
}
const subscription = PassioSDK.startFoodDetection(
config,
async (detection: FoodDetectionEvent) => {
const { candidates, nutritionFacts } = detection
if (candidates?.barcodeCandidates?.length) {
// show barcode candidates to the user
} else if (candidates?.packagedFoodCode?.length) {
// show OCR candidates to the user
} else if (candidates?.detectedCandidates?.length) {
// show visually recognized candidates to the user
}
},
);
// stop food detection when component unmounts
return () => subscription.remove();
}, [isReady]);
The FoodCandidates object that is returned in the recognition callbacks contains three lists:
detectedCandidates detailing the result of VISUAL detection
barcodeCandidates detailing the result of BARCODE detection
packagedFoodCandidates detailing the result of PACKAGED detection
Only the corresponding candidate lists will be populated (e.g. if you define detection types VISUAL and BARCODE, you will never receive a packagedFoodCandidates list in this callback).
Visual detection
A DetectedCandidate represents the result from running Passio's neural network, specialized for detecting foods like apples, salads, burgers etc. The properties of a detected candidate are:
name
passioID (a unique identifier used to query the nutrition database)
confidence (a measure of how confident the network is in this candidate, ranging from 0 to 1)
boundingBox (a rectangle detailing the bounds of the recognized item within the image dimensions)
alternatives (a list of alternative foods that are visually or contextually similar to the recognized food)
croppedImage (the image that the recognition was run on)
To fetch the full nutrition data of a detected candidate use:
public func fetchFoodItemFor(passioID: PassioNutritionAISDK.PassioID, completion: @escaping (PassioNutritionAISDK.PassioFoodItem?) -> Void)
fun fetchFoodItemForPassioID(
passioID: PassioID,
onResult: (foodItem: PassioFoodItem?) -> Unit
)
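For example, on Android the top visual candidate could be resolved to its full nutrition data from inside onRecognitionResults. A sketch, assuming the method is exposed on PassioSDK.instance like the other calls in this guide:
candidates.detectedCandidates?.firstOrNull()?.let { topCandidate ->
    PassioSDK.instance.fetchFoodItemForPassioID(topCandidate.passioID) { foodItem ->
        // foodItem is null when no record exists for this PassioID
        foodItem?.let { /* update the UI with its nutrition data */ }
    }
}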
Example of an image that produces a DetectedCandidate:
Barcode detection
The SDK can detect barcodes located on the packaging of packaged foods. A BarcodeCandidate has only two properties: the value of the barcode and its bounding box.
To fetch the full nutrition data of a BarcodeCandidate, use this API, passing the barcode value:
public func fetchFoodItemFor(productCode: String, completion: @escaping ((PassioNutritionAISDK.PassioFoodItem?) -> Void))
fun fetchFoodItemForProductCode(
productCode: String,
onResult: (foodItem: PassioFoodItem?) -> Unit
)
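For example, on iOS a barcode picked up inside recognitionResults(candidates:image:) can be resolved like this (a sketch; passioSDK is the same SDK instance used earlier):
if let barcode = candidates?.barcodeCandidates?.first {
    passioSDK.fetchFoodItemFor(productCode: barcode.value) { foodItem in
        // foodItem is nil when the product is not in the nutrition database
        print("Fetched food item: \(String(describing: foodItem))")
    }
}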
Example of an image that produces multiple BarcodeCandidates:
Packaged food detection
Passio uses OCR to recognize the names of branded foods. Packaged food detection works when the user points the device's camera at the front-facing side of a packaged food. The result of that process is a PackagedFoodCandidate, which holds only two values: the packaged food code and a measure of confidence.
Use the same API as in #barcode-detection to retrieve the full nutrition data, passing the packaged food code as the parameter.
Example of an image that produces a PackagedFoodCandidate: