Steps for Machine Learning on Apple Silicon M1/M2 chips with Stable Diffusion
You can use Apple’s ml-stable-diffusion to get started, or try my fork, which adds a download_model.py script for convenience.
Feel free to git clone my fork, or use Apple’s repo, and then run that script.
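For example, the clone step might look like this (Apple’s repo URL shown; swap in the fork’s URL if you want the scripts/download_model.py convenience script):

git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion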
So far Apple has converted only a few models to Core ML. Check out huggingface.co/coreml to find many more, including Stable Diffusion 2.1.
Modify scripts/download_model.py (source) to choose the model you’d like to download.
Then run it from the command line: python scripts/download_model.py
Check the models/<output> folder to find the downloaded model. If it came from https://huggingface.co/coreml (models converted to Core ML by the community), unzip it first.
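For example, if your download is the 2.1-base zip used below (the exact file name is an assumption; check what actually landed in models/):

unzip models/coreml-stable-diffusion-2-1-base_original.zip -d models/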
After downloading the model with the script above, run the following from the command line to generate images for a given prompt:
# MODEL=coreml-stable-diffusion-v1-4_original_compiled
# MODEL=coreml-stable-diffusion-v1-5_original_compiled
MODEL=coreml-stable-diffusion-2-1-base_original
# MODEL=coreml-stable-diffusion-2-1-base_split_einsum
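# Note: "original" attention models run on CPU and GPU; "split_einsum" models
# can also use the Neural Engine, hence the two COMPUTE_UNITS options below.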
# COMPUTE_UNITS=all # "split_einsum" models
COMPUTE_UNITS=cpuAndGPU # "original" models
OUTPUT_PATH=output_images/$MODEL
mkdir -p $OUTPUT_PATH
PROMPT="a photograph of an astronaut riding on a horse"
SEED=42 # 93 is the default
echo "Generating \"$PROMPT\" on $MODEL with seed $SEED"
time swift run StableDiffusionSample "$PROMPT" --resource-path "models/$MODEL" --compute-units "$COMPUTE_UNITS" --output-path "$OUTPUT_PATH" --seed "$SEED"
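When the run finishes, the images are written to $OUTPUT_PATH; on macOS you can open that folder straight from the shell:

open "$OUTPUT_PATH"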