Adding a stroke effect to an image's border in SwiftUI


I'm trying to recreate the festive lights image from Apple India's website (screenshot below) in SwiftUI. Desired result:

[Image: Apple India Diwali logo]

Here's what I've achieved so far:

[Screenshot: my current result]

My understanding is that images aren't shapes, so we can't stroke their borders. However, I've found that the shadow() modifier sits nicely along an image's outline, so what I need is a way to customize that shadow and an understanding of how it works.
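Roughly, that shadow-based attempt looks like this (a minimal sketch only; the colors and sizes are placeholders, not my exact code):

import SwiftUI

// minimal sketch of the shadow-based attempt; values are placeholders
struct ShadowAttemptView: View {
    var body: some View {
        ZStack {
            Color.black.ignoresSafeArea()
            // shadow() follows the symbol's outline, giving a soft "glow" border
            Image(systemName: "applelogo")
                .font(.system(size: 200))
                .foregroundColor(.black)
                .shadow(color: .orange, radius: 4)
        }
    }
}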
So far, besides the code above, I've also tried converting the given SF Symbol into a shape using the Vision framework's contour detection, based on my understanding of this article: https://www.iosdevie.com/p/new-in-ios-14-vision-contour-detection, but without success.
Could someone please guide me on how to do this, preferably using only SF Symbols?
2 Answers


It turns out the Vision contour-detection approach wasn't bad after all; I was just missing a few things, which @DonMag pointed out. Here's my final SwiftUI answer, in case anyone is interested.

First, we create an InsettableShape:

struct MKSymbolShape: InsettableShape {
    var insetAmount = 0.0
    let systemName: String
    
    var trimmedImage: UIImage {
        let cfg = UIImage.SymbolConfiguration(pointSize: 256.0)
        // get the symbol
        guard let imgA = UIImage(systemName: systemName, withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
            fatalError("Could not load SF Symbol: \(systemName)!")
        }
        
        // we want to "strip" the bounding box empty space
        // get a cgRef from imgA
        guard let cgRef = imgA.cgImage else {
            fatalError("Could not get cgImage!")
        }
        // create imgB from the cgRef
        let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
            .withTintColor(.black, renderingMode: .alwaysOriginal)
        
        // now render it on a white background
        let resultImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
            UIColor.white.setFill()
            ctx.fill(CGRect(origin: .zero, size: imgB.size))
            imgB.draw(at: .zero)
        }
        
        return resultImage
    }
    
    func path(in rect: CGRect) -> Path {
        // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
        //  so we want to scale the path to our view bounds
        
        let inputImage = self.trimmedImage
        guard let cgPath = detectVisionContours(from: inputImage) else { return Path() }
        let scW: CGFloat = (rect.width - CGFloat(insetAmount)) / cgPath.boundingBox.width
        let scH: CGFloat = (rect.height - CGFloat(insetAmount)) / cgPath.boundingBox.height
        
        // we need to invert the Y-coordinate space
        var transform = CGAffineTransform.identity
            .scaledBy(x: scW, y: -scH)
            .translatedBy(x: 0.0, y: -cgPath.boundingBox.height)
        
        if let imagePath = cgPath.copy(using: &transform) {
            return Path(imagePath)
        } else {
            return Path()
        }
    }
    
    func inset(by amount: CGFloat) -> some InsettableShape {
        var shape = self
        shape.insetAmount += amount
        return shape
    }
    
    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
        let contourRequest = VNDetectContoursRequest()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        
        let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
        try! requestHandler.perform([contourRequest])
        if let contoursObservation = contourRequest.results?.first {
            return contoursObservation.normalizedPath
        }
        
        return nil
    }
}

Then we create the main view:
struct PreviewView: View {
    var body: some View {
        ZStack {
            LinearGradient(colors: [.black, .purple], startPoint: .top, endPoint: .bottom)
                .edgesIgnoringSafeArea(.all)
            MKSymbolShape(systemName: "applelogo")
                .stroke(LinearGradient(colors: [.yellow, .orange, .pink, .red], startPoint: .top, endPoint: .bottom), style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
                .aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)
                .padding()
        }
    }
}

The final result: [Image: final look]
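Side note: because MKSymbolShape conforms to InsettableShape, strokeBorder could be used in place of stroke to keep the dashed line inside the view's bounds. A minimal, untested variation (not from the original post; the exact inset depends on how insetAmount is applied in path(in:)):

// drop-in replacement for the stroke call inside PreviewView
MKSymbolShape(systemName: "applelogo")
    .strokeBorder(LinearGradient(colors: [.yellow, .orange, .pink, .red], startPoint: .top, endPoint: .bottom),
                  style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
    .aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)
    .padding()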

We can use the Vision framework with VNDetectContourRequestRevision1 to get a cgPath:
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
    
    let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
    
    let contourRequest = VNDetectContoursRequest.init()
    contourRequest.revision = VNDetectContourRequestRevision1
    contourRequest.contrastAdjustment = 1.0
    contourRequest.maximumImageDimension = 512
    
    let requestHandler = VNImageRequestHandler.init(ciImage: inputImage, options: [:])
    try! requestHandler.perform([contourRequest])
    if let contoursObservation = contourRequest.results?.first {
        return contoursObservation.normalizedPath
    }
    
    return nil
}

The path comes back in a normalized 0,0 to 1.0,1.0 coordinate space, so to use it we need to scale it up to the size we want. It also uses an inverted Y-axis, so we need to flip it as well:
    // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
    //  so we want to scale the path to our view bounds
    let scW: CGFloat = targetRect.width / cgPth.boundingBox.width
    let scH: CGFloat = targetRect.height / cgPth.boundingBox.height
    
    // we need to invert the Y-coordinate space
    var transform = CGAffineTransform.identity
        .scaledBy(x: scW, y: -scH)
        .translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
    
    return cgPth.copy(using: &transform)

A few notes...

When we use UIImage(systemName: "applelogo"), we get an image with "font" characteristics, that is, with empty space around the glyph. See https://dev59.com/dXIOtIcB2Jgan1znLJnL#71743787 and https://stackoverflow.com/a/66293917/6257435 for discussion.

So we could use it directly, but that complicates scaling and translating the path.

So, instead of using this "default" image:

[Image: default "applelogo" system image, showing the empty-space bounding box]

we can use a bit of code to "trim" the empty space and get a more usable image:

[Image: trimmed "applelogo" image]
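The trim itself is just re-wrapping the symbol's cgImage, which is cropped tight to the glyph, in a new UIImage. A condensed sketch of that step from the full example below, with a size printout to show the difference (the point size is arbitrary):

import UIKit

// the UIImage reports font-metric bounds, but its cgImage is cropped to the glyph,
// so re-wrapping the cgImage "trims" the empty space
let cfg = UIImage.SymbolConfiguration(pointSize: 240.0)
if let imgA = UIImage(systemName: "applelogo", withConfiguration: cfg),
   let cgRef = imgA.cgImage {
    let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
    print("default size:", imgA.size)   // includes the empty space
    print("trimmed size:", imgB.size)   // tight to the glyph
}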

Then we can use the Vision path as the path of a CAShapeLayer, with layer properties such as .lineCap = .round / .lineWidth = 8 / .lineDashPattern = [2.0, 20.0], to get a "dotted" stroke:

[Image: dotted outline drawn with the Vision-detected path]

Then we can use that same path on a shape layer as the mask of a gradient layer:

[Image: gradient layer masked by the dotted outline]

Finally, we remove the image view so only the view with the masked gradient layer remains:

[Image: final view with the dotted, gradient-masked outline on its own]

Here's the example code that produces this:

import UIKit
import Vision

class ViewController: UIViewController {
    
    let myOutlineView = UIView()
    let myGradientView = UIView()
    let shapeLayer = CAShapeLayer()
    let gradientLayer = CAGradientLayer()

    let defaultImageView = UIImageView()
    let trimmedImageView = UIImageView()

    var defaultImage: UIImage!
    var trimmedImage: UIImage!
    
    var visionPath: CGPath!

    // an information label
    let infoLabel: UILabel = {
        let v = UILabel()
        v.backgroundColor = UIColor(white: 0.95, alpha: 1.0)
        v.textAlignment = .center
        v.numberOfLines = 0
        return v
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        
        view.backgroundColor = .systemBlue
        
        // get the system image at 240-points (so we can get a good path from Vision)
        //  experiment with different sizes if the path doesn't appear smooth
        let cfg = UIImage.SymbolConfiguration(pointSize: 240.0)
        
        // get "applelogo" symbol
        guard let imgA = UIImage(systemName: "applelogo", withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
            fatalError("Could not load SF Symbol: applelogo!")
        }
        // now render it on a white background
        self.defaultImage = UIGraphicsImageRenderer(size: imgA.size).image { ctx in
            UIColor.white.setFill()
            ctx.fill(CGRect(origin: .zero, size: imgA.size))
            imgA.draw(at: .zero)
        }

        // we want to "strip" the bounding box empty space
        // get a cgRef from imgA
        guard let cgRef = imgA.cgImage else {
            fatalError("Could not get cgImage!")
        }
        // create imgB from the cgRef
        let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
            .withTintColor(.black, renderingMode: .alwaysOriginal)
        
        // now render it on a white background
        self.trimmedImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
            UIColor.white.setFill()
            ctx.fill(CGRect(origin: .zero, size: imgB.size))
            imgB.draw(at: .zero)
        }

        defaultImageView.image = defaultImage
        defaultImageView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(defaultImageView)

        trimmedImageView.image = trimmedImage
        trimmedImageView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(trimmedImageView)
        
        myOutlineView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(myOutlineView)
        
        myGradientView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(myGradientView)
        
        // next step button
        let btn = UIButton()
        btn.setTitle("Next Step", for: [])
        btn.setTitleColor(.white, for: .normal)
        btn.setTitleColor(.lightGray, for: .highlighted)
        btn.backgroundColor = .systemRed
        btn.layer.cornerRadius = 8
        
        btn.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(btn)
        
        infoLabel.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(infoLabel)
        
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            
            // inset default image view 20-points on each side
            //  height proportional to the image
            //  near the top
            defaultImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
            defaultImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            defaultImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
            defaultImageView.heightAnchor.constraint(equalTo: defaultImageView.widthAnchor, multiplier: defaultImage.size.height / defaultImage.size.width),
            
            // inset trimmed image view 40-points on each side
            //  height proportional to the image
            //  centered vertically
            trimmedImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0),
            trimmedImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
            trimmedImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
            trimmedImageView.heightAnchor.constraint(equalTo: trimmedImageView.widthAnchor, multiplier: self.trimmedImage.size.height / self.trimmedImage.size.width),
            
            // add outline view on top of trimmed image view
            myOutlineView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
            myOutlineView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
            myOutlineView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
            myOutlineView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
            
            // add gradient view on top of trimmed image view
            myGradientView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
            myGradientView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
            myGradientView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
            myGradientView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
            
            // button and info label below
            btn.topAnchor.constraint(equalTo: defaultImageView.bottomAnchor, constant: 20.0),
            btn.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
            btn.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),

            infoLabel.topAnchor.constraint(equalTo: btn.bottomAnchor, constant: 20.0),
            infoLabel.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
            infoLabel.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
            infoLabel.heightAnchor.constraint(greaterThanOrEqualToConstant: 60.0),
            
        ])
        
        // setup the shape layer
        shapeLayer.strokeColor = UIColor.red.cgColor
        shapeLayer.fillColor = UIColor.clear.cgColor
        
        // this will give us round dots for the shape layer's stroke
        shapeLayer.lineCap = .round
        shapeLayer.lineWidth = 8
        shapeLayer.lineDashPattern = [2.0, 20.0]
        
        // setup the gradient layer
        let c1: UIColor = .init(red: 0.95, green: 0.73, blue: 0.32, alpha: 1.0)
        let c2: UIColor = .init(red: 0.95, green: 0.25, blue: 0.45, alpha: 1.0)
        gradientLayer.colors = [c1.cgColor, c2.cgColor]
        
        myOutlineView.layer.addSublayer(shapeLayer)
        myGradientView.layer.addSublayer(gradientLayer)

        btn.addTarget(self, action: #selector(nextStep), for: .touchUpInside)
    }
    
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
    
        guard let pth = pathSetup()
        else {
            fatalError("Vision could not create path")
        }
        self.visionPath = pth
        
        shapeLayer.path = pth
        
        gradientLayer.frame = myGradientView.bounds.insetBy(dx: -8.0, dy: -8.0)
        let gradMask = CAShapeLayer()
        gradMask.strokeColor = UIColor.red.cgColor
        gradMask.fillColor = UIColor.clear.cgColor
        gradMask.lineCap = .round
        gradMask.lineWidth = 8
        gradMask.lineDashPattern = [2.0, 20.0]
        
        gradMask.path = pth
        gradMask.position.x += 8.0
        gradMask.position.y += 8.0
        gradientLayer.mask = gradMask
        
        nextStep()
    }
    
    var idx: Int = -1
    
    @objc func nextStep() {
        idx += 1
        switch idx % 5 {
        case 1:
            defaultImageView.isHidden = true
            trimmedImageView.isHidden = false
            infoLabel.text = "\"applelogo\" system image - with trimmed empty-space bounding-box."
        case 2:
            myOutlineView.isHidden = false
            shapeLayer.opacity = 1.0
            infoLabel.text = "Dotted outline shape using Vision detected path."
        case 3:
            myOutlineView.isHidden = true
            myGradientView.isHidden = false
            infoLabel.text = "Use Dotted outline shape as a gradient layer mask."
        case 4:
            trimmedImageView.isHidden = true
            view.backgroundColor = .black
            infoLabel.text = "View by itself with Dotted outline shape as a gradient layer mask."
        default:
            view.backgroundColor = .systemBlue
            defaultImageView.isHidden = false
            trimmedImageView.isHidden = true
            myOutlineView.isHidden = true
            myGradientView.isHidden = true
            shapeLayer.opacity = 0.0
            infoLabel.text = "Default \"applelogo\" system image - note empty-space bounding-box."
        }
    }

    func pathSetup() -> CGPath? {
        // get the cgPath from the image
        guard let cgPth = detectVisionContours(from: self.trimmedImage)
        else {
            print("Failed to get path!")
            return nil
        }
        
        // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
        //  so we want to scale the path to our view bounds
        let scW: CGFloat = myOutlineView.bounds.width / cgPth.boundingBox.width
        let scH: CGFloat = myOutlineView.bounds.height / cgPth.boundingBox.height
        
        // we need to invert the Y-coordinate space
        var transform = CGAffineTransform.identity
            .scaledBy(x: scW, y: -scH)
            .translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
        
        return cgPth.copy(using: &transform)
    }
    
    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        
        let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
        
        let contourRequest = VNDetectContoursRequest.init()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        
        let requestHandler = VNImageRequestHandler.init(ciImage: inputImage, options: [:])
        try! requestHandler.perform([contourRequest])
        if let contoursObservation = contourRequest.results?.first {
            return contoursObservation.normalizedPath
        }
        
        return nil
    }
}

The image trimming and the UIImage quirk are exactly what I was missing. I've posted my own answer using SwiftUI, since that's what I originally wanted to do. Thank you. - technusm1
@technusm1 - Ah... yes, I rarely use SwiftUI, and I had fully intended to preface my answer with "this will need adapting...". Glad it helped you :) - DonMag
